Insight System
Administration Manual
Version 2.0
This document records the daily, weekly, and monthly system
administration jobs with their detailed steps, and also describes
the software architecture and data processing procedures, helping
system administrators understand how the Insight system operates.
AIX Support
10/1/2012
Machine Model:
9117-570
SN: 10A32FD
Contents
System Summary
Start Insight System on Server ifx01
Shutdown Insight system on Server ifx01
Daily System Administration Operation
Data Loading
Archive and Clean Log file & directory
Clean up the log and ptr files
System and application backup
Perl script funnel files
Daily cron job backup
Monitor
Weekly System Administration Operation
Monthly System Administration Operation
Archive database structure and tables (Tariff is not included)
Prepare chunk files on OS level for Database Server (ardb)
Add chunk files to dbspaces on Database Server (ardb)
Purge Archive DB (ip_arch01) data older than 7 years
Create Database (ip_arch05)
Re-create the schema of the newly created database (ip_arch05)
The archived tables
Switch Archive Database when the 1.5-year limit is hit
Archive and purge steps
Archive data from production database server (ipdb) to archive database server (ardb)
Start archive database server
Run autoarchive.ksh scripts
Confirm the right data is archived on ip_arch04@ardb
Backup Archive database server (ardb)
Ask operator to insert Archive Tape Set (A/B/C/D) into ifx01 tape drive
Bring ardb instance down
Ask operator to send the backup tape set to Iron Mountain
Purge B3 data on production database server: ipdb, database: ip_0p
Picking a weekend to purge B3 in the production database is strongly recommended
Start from 2013-05-03
Stop all data load programs (runner); make sure there is no TCL running
Wait until there is no 'tcl' program running
Start Purge program
Confirm the purge process completed successfully
Clean the RUNNER log
Uncomment all data load programs (restart runner)
Archive storage consideration
Insight (ifx01) DRP procedure
Step 1: Restore the basic OS (rootvg) via OS backup tape
Step 2: Set up system file systems environment for application restoration
Step 3: Restore Application
Step 4: Set up Informix database restore environment
Step 5: Restore Informix database
Step 6: Bring database and application online
Archive database server (ardb) DRP consideration
Download data in archive database server (ardb) for old shipment
Insight DB refresh (from ifx01 to ipdev) procedure
Manually re-load data files
/var/adm/wtmp (who temp file) too large
"There are no subheaders, line, or recaps in the informix database for this transaction number. Please reload it."
hs_duty_rate table refresh
Data processing LOGS
Storage consideration and adding database storage space (chunks)
All additional chunks go to /ix_dat4
Reclaiming Unused Space Within an Extent
Connect Session
Monitoring locks
Release the log files
To quickly load a large, existing standard table
To quickly load a new, large table
What happens between a client and server when a TCP/IP connection is opened
Strategy for estimating the size of the physical log
The onstat -g rea command
The onstat -g ioq command
Create Intermediate data to hold WIPs recycled b3iid records on IPDEV Database ip_arch
Map File of Informix table fields to Locus record
IPDEV Informix configuration
Applications and Scripts
Practice One: Migrate Informix from production server to test server and configure continuous log restore with ontape
Call IBM 1-800-426-7378 to open a hardware ticket
Practice Two: Set up a test Informix database on Red Hat Linux 5.8 with VMware
Add disk space
Share directory on Windows
Install Informix 11.7 on RH Linux 5.8 64-bit
Configure Linux system for Informix
Load tables between two instances/databases using the unload/load utility
Adjust the size of log files to prevent long transactions
Add more tempdbs space to build (set) constraints and indexes for a large table
Load tables between two instances/databases using SQL
When using a TEMP table, add more tempdbs space
Turn on database ip_0p log mode (you may need to set the instance to single-user mode)
Insert a large table piece by piece using rowid
Archive and Purge B3 Table
NIM
Setup NIM Environment
How NIM Works
Define NIM Resource
Manage NIM resource
Install/Migrate OS/software
What you can do on HMC with Web-based System Manager
What is vi?
To Get Into and Out Of vi
To Start vi
To Exit vi
Moving the Cursor
Screen Manipulation
Adding, Changing, and Deleting Text
Inserting or Adding Text
Changing Text
Deleting Text
Cutting and Pasting Text
Other Commands
Searching Text
Determining Line Numbers
Saving and Reading Files
Exploring the Sysmaster Database
1. A Practical Example - Who is Using What Database
2. How the Sysmaster Database is Created
Supported SMI Tables
Differences From Other Databases
3. Server Information
Server configuration and statistics tables
4. Dbspace and Chunk Information
Displaying Free Dbspace
Displaying Chunk Status
5. Database and Table Information
IO Performance of Tables
6. User Session Information
7. Some Unsupported Extras
Conclusion
Document Acceptance and Sign-off
Revision History
Document Purpose
Design Assumptions and Dependencies
Design Considerations and Constraints
Component/Application (Process) Design
Process Model Diagram
Process Descriptions
6.2.1 The funnel files
6.3 The VAX file parser
6.3.1 The Key file
6.3.2 The output files
6.3.3 The log files
6.3.4 Not covered by this document
Issues & Action Items
Glossary
Appendix
Example
Run xwin on laptop as root
Start Nagios
Configure network tuning parameters
System Summary
The Insight System on server ifx01 is a Tuxedo data processing application: Tuxedo data
readers (clients) make requests, and Tuxedo database update services (servers) process
those requests and return responses.
The original data files are transferred from the VMS Locus system. A script program, Runner,
starts the Tuxedo client to read all of these data files, and the Tuxedo client sends data
update requests to the Tuxedo server services. The Tuxedo server then updates the database
tables according to the client queue requests.
Informix server v11.5 is the database server that stores the data from the VMS Locus
application and provides data services to the Insight web application.
USER INFORMATION:
INSIGHT COMPLIANCE CENTER CDN:
https://insight.livingstonintl.com/insight/SECURE/WKBMAIN.ASPX
USER ID: ICCCGCINC
PASSWORD: ICCCGCINC
https://insightdocs.livingstonintl.com/ca/SECURE/ETCWKBMAIN.ASPX
USER NAME: DCKINNEAR
PASSWORD: DCKINNEAR
DBeaver JDBC connection settings
jdbc:informix-sqli://ifx01:6900/ip_arch05:INFORMIXSERVER=ardb
jdbc:informix-sqli://ifx01:6800/sysadmin:INFORMIXSERVER=ipdb
jdbc:informix-sqli://ipdev:6600/ip_systest:INFORMIXSERVER=systestdb
jdbc:informix-sqli://ipdev:6900/ip_arch:INFORMIXSERVER=artestdb
jdbc:informix-sqli://ibmserver:25337/ip_0p:INFORMIXSERVER=ol_informix1170
Accounts (user / password):
hmc04 (192.168.103.144): hscroot / abc6688
root / root@ifx
informix / infxrmvb (ifx01), infdev (ipdev)
ipgown / ipgown2007
tuxedo / tuxedoadmin
dmqvax / newdmqvax
Drivers:
Informix-PDC (IBM Informix Dynamic Server), driver class: com.informix.jdbc.IfxDriver
Start Insight System on Server ifx01
1. Boot up server ifx01, log in to the system as the root user, and check and confirm the
following OS-level environment configuration:
# hostname
ifx01
# ifconfig -a
en0:
inet 192.168.108.60 netmask 0xffffff00 broadcast 192.168.108.255
# netstat -rn
default 192.168.108.254 UG 11 263906779 en0
# nslookup
> server
Default server: 192.168.100.7
Address: 192.168.100.7#53
# lsvg -o
archdbvg
livedbvg
appsvg
rootvg
TIPS: To set up the OS configuration for the Insight system manually:
Log in as root
# hostname ifx01
# ifconfig en0 192.168.108.60
TIP: When network file transfer (FTP/RCP) is very slow
root@ifx01:/ #ifconfig -a
en0:
flags=1e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,6
4BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
inet 192.168.108.60 netmask 0xffffff00 broadcast 192.168.108.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 0
en1:
flags=1e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,6
4BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
inet 192.168.105.61 netmask 0xffffff00 broadcast 192.168.105.255
tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
lo0:
flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BI
T,LARGESEND,CHAIN>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1%1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
#ifconfig en1 detach
#smitty device -> communication -> Ethernet Adapter ->
Change / Show Characteristics of an Ethernet Adapter
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
Ethernet Adapter ent1
Description 2-Port 10/100/1000 Base-TX PCI-X Ada>
Status Available
Location 04-09
Transmit jumbo frames no +
Enable hardware transmit and receive checksum yes +
Media speed 1000_Full_Duplex +
Enable ALTERNATE ETHERNET address no +
ALTERNATE ETHERNET address [0x000000000000] +
Apply change to DATABASE only no +
Enable failover mode disable +
Change / Show Characteristics of an Ethernet Adapter
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
Ethernet Adapter ent1
Description 2-Port 10/100/1000 Ba>
Status Available
Location 04-09
Transmit jumbo frames no +
Enable hardware transmit and receive checksum yes +
Media speed Auto_Negotiation +
Enable ALTERNATE ETHERNET address no +
ALTERNATE ETHERNET address [0x000000000000] +
Apply change to DATABASE only +
Enable failover mode disable +
There are a few different ways to approach this problem, but the easiest is to
turn off large_send and chksum_offload on ifx01, and to permanently enable tcp_nodelay=1 on that
interface while you are at it, since you must ifconfig down the interface in order to make changes at the
device level.
So you would have to, on ifx01:
# ifconfig en0 down detach    (this brings communications down on en0)
# chdev -l ent0 -a large_send=no -a chksum_offload=no
# chdev -l en0 -a tcp_nodelay=1
# lsattr -El ent0
# lsattr -El en0
Confirm the settings have changed, then bring the interface back up:
# ifconfig en0 up
2. Start the Informix database server ipdb:
Log in as user informix
$ . ./ids115.env ipdb
$ oninit
3. Run the Tuxedo application:
Log in as ipgown
$ cd /usr/apps/ipg/ver001/srv/locus
$ . ./setenv.locus
$ tmboot -y
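The startup steps above can be sketched as one wrapper script. This is an illustrative sketch, not a script that exists on ifx01: the paths and environment files are the ones shown in the steps, and DRY_RUN is a hypothetical safety switch that prints the commands instead of executing them.

```shell
#!/bin/sh
# Illustrative wrapper for the Insight startup sequence (a sketch, not the
# production procedure). With DRY_RUN=1 (the default) it only prints the
# commands, so the sequence can be reviewed without touching the system.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        eval "$*"
    fi
}

start_insight() {
    # Step 2: start the Informix instance ipdb as user informix
    run "su - informix -c '. ./ids115.env ipdb && oninit'"
    # Step 3: boot the Tuxedo application as user ipgown
    run "su - ipgown -c 'cd /usr/apps/ipg/ver001/srv/locus && . ./setenv.locus && tmboot -y'"
}
```

Running `start_insight` with DRY_RUN unset (or set to 1) prints the two commands for review; DRY_RUN=0 would execute them in order.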
Shutdown Insight system on Server ifx01
4. Comment out the data loading lines in root's crontab, and wait until NO data loading program
(tcl) is running:
# ps -ef | grep tcl
TIP: Remember to uncomment all of these lines when you start the Insight system again.
5. Shut down the Tuxedo application:
# tmshutdown -y
6. Shut down Informix:
Log in as user informix
$ onmode -ky
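The wait in step 4 can be automated with a small polling loop. This is a sketch of assumed logic, not one of the manual's scripts; the `[t]cl` bracket trick keeps grep from matching its own entry in the ps output.

```shell
# Poll until no data-loading (tcl) process is left in the process table.
# The pattern [t]cl matches the string "tcl" in a ps line, but the grep
# command's own entry shows "[t]cl" literally and so does not match itself.
wait_for_no_tcl() {
    while ps -ef | grep -q '[t]cl'; do
        sleep 30
    done
}
```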
Daily System Administration Operation
Insight System daily operation jobs are run automatically by the OS, scheduled in root's
crontab. The jobs include:
Data Loading
Archive and Clean Log file & directory
Clean up the log and ptr files
System and application backup
Perl script funnel files
Daily cron job backup
Monitor
Data Loading
Load/Update data files from LOCUS (and/or Coda) into the Informix database.

Job: Data Loading
Script: /insight/local/scripts/runner.10.ksh
Schedule: 06:00 to 07:59, every minute
Log: /dmqjtmp/archiveRunnerLog/runner.10.out
Email: AIXSupport@livingstonintl.com

Script: /insight/local/scripts/runner.all.ksh
Schedule: 08:00 to 20:59, every minute
Log: /dmqjtmp/archiveRunnerLog/runner.all.out
Email: AIXSupport@livingstonintl.com

Script: /insight/local/scripts/runner.71.ksh
Schedule: 22:20 to 22:40, every minute
Log: /dmqjtmp/archiveRunnerLog/runner.71.out
Email: AIXSupport@livingstonintl.com

Script: /insight/local/scripts/iccdataupload/StartInsightUpload.ksh
Schedule: 08:00 to 20:59, every hour at minutes 1, 26, 31, 46
Log: /insight/local/scripts/iccdataupload/StartInsightUpload.out
Email: ekotsalainen@livingstonintl.com, lchen@livingstonintl.com

Script: /insight/local/scripts/ICCSetExpiryDates/StartInsightSetExpiryDates.ksh
Schedule: 08:00 to 20:59, every hour at minutes 2, 27, 32, 47
Log: /insight/local/scripts/ICCSetExpiryDates/StartInsightSetExpiryDates.out
Email: cstanciu@livingstonintl.com, lchen@livingstonintl.com

Script: /insight/local/scripts/ICCBillingUpload/StartInsightBillingUpload.ksh
Schedule: 15:01
Log: /insight/local/scripts/ICCBillingUpload/StartInsightBillingUpload.out
Email: cstanciu@livingstonintl.com, lchen@livingstonintl.com
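The schedules above translate into root crontab entries. The lines below are an illustrative reconstruction from the schedule column, not a copy of the live crontab; the output redirections and field spacing are assumptions.

```shell
# min        hour  dom mon dow  command (illustrative reconstruction)
*            6-7   *   *   *    /insight/local/scripts/runner.10.ksh  >> /dmqjtmp/archiveRunnerLog/runner.10.out 2>&1
*            8-20  *   *   *    /insight/local/scripts/runner.all.ksh >> /dmqjtmp/archiveRunnerLog/runner.all.out 2>&1
20-40        22    *   *   *    /insight/local/scripts/runner.71.ksh  >> /dmqjtmp/archiveRunnerLog/runner.71.out 2>&1
1,26,31,46   8-20  *   *   *    /insight/local/scripts/iccdataupload/StartInsightUpload.ksh
2,27,32,47   8-20  *   *   *    /insight/local/scripts/ICCSetExpiryDates/StartInsightSetExpiryDates.ksh
1            15    *   *   *    /insight/local/scripts/ICCBillingUpload/StartInsightBillingUpload.ksh
```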
Archive and Clean Log file & directory
Archive the Tuxedo instance ULOG files, processed data files, token files, ierr*, iaud*, dmqlog.*, and B3*.log log files and output files
day by day, and clean archived files & directories older than 3 days.

Job: Archive Log file & directory
Script: /insight/local/scripts/tuxLogClean.ksh
Schedule: 06:30
Log: /dmqjtmp/archiveTuxLog/tuxLogClean.out
Email: N/A

Script: /insight/local/scripts/oprLogClean.ksh
Schedule: 23:30
Log: /dmqjtmp/archiveOprLog/oprLogClean.out
Email: N/A

Script: /insight/local/scripts/betaLogClean.ksh
Schedule: 23:30
Log: /dmqjtmp/archiveBetaLog/betaLogClean.out
Email: N/A

Script: /insight/local/scripts/bdsLogClean.ksh
Schedule: 04:05
Log: /dmqjtmp/archiveBdsLog/bdsLogClean.out
Email: N/A
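The "clean archived files & directories older than 3 days" step maps naturally onto find's -mtime test. The function below is a minimal sketch of that idea (assumed logic for illustration; the real tuxLogClean.ksh and related scripts are not reproduced in this manual):

```shell
# Remove files older than 3 days under the given archive directory.
# -mtime +3 selects files last modified more than 3 days ago.
clean_old_archives() {
    dir=$1
    find "$dir" -type f -mtime +3 -exec rm -f {} \;
}
```

For example, `clean_old_archives /dmqjtmp/archiveTuxLog` would prune that archive tree.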
Clean up the log and ptr files
Clean application and database logs older than one week, and clean system backup logs older than one month.
Clean /usr/apps/dmq/beta/dmqptr_*.ptr and /usr/apps/dmq/*.MMA files older than 3 days.

Job: Clean up the log and ptr files
Script: /insight/local/scripts/ptrLogClean.ksh
Schedule: 06:30
Log: /dmqjtmp/ptrCleanLog/ptrLogClean.out
Email: N/A

Script: /insight/local/scripts/bkupLogClean.ksh
Schedule: 06:30
Log: /dmqjtmp/bkupCleanLog/bkupLogClean.out
Email: N/A

Script: /insight/local/scripts/sqexplainClean.ksh
Schedule: 06:30
Log: N/A
Email: N/A
System and application backup
Back up the application (OS file systems) and the Informix database.

Job: System and application backup
Command: ksh -c "/insight/local/backup/appbkup.ksh rmt0"
Schedule: 01:05, except Monday morning
Log: /dmqjtmp/archiveAppbkupLog/appbkup.out
Email: computerops@livingstonintl.com, AIXSupport@livingstonintl.com

Commands: ksh -c "/insight/local/backup/dbsbkup.ksh"
          su - informix -c "/insight/local/backup/infbkup.ksh"
Schedule: 04:01, except Monday morning
Logs: /dmqjtmp/archiveDbsbkupLog/dbsbkup.out
      /dmqjtmp/archiveDbsbkupLog/infbkup.out
Email: computerops@livingstonintl.com, AIXSupport@livingstonintl.com
Perl script funnel files
Get the Tuxedo process result report.
Job: Perl script funnel files

  Script:    /insight/local/scripts/getTxnRpt.pl
  Scheduled: 6:30AM
  Log:       /dmqjtmp/archiveFfileLog/getTxnRpt.out
  Email:     N/A
Daily cron job backup
Job: Backup cron table

  Script:    /insight/local/scripts/cron_bkup.ksh
  Scheduled: 3:00AM
  Output:    /insight/local/crontabs/*
  Email:     N/A
Monitor
Monitor the system error report messages and Insight subsystem error messages.
Job: Monitor

  Script:    /insight/local/scripts/alertDog.ksh
  Scheduled: Monday to Friday, every hour at minute 10
  Log:       /dmqjtmp/archiveAdogLog/alertDog.out
  Email:     computerops@livingstonintl.com, AIXSupport@livingstonintl.com

  Script:    /insight/local/scripts/watchDog.ksh
  Scheduled: Monday to Friday, every hour at minute 30
  Log:       /dmqjtmp/archiveWdogLog/watchDog.out
  Email:     computerops@livingstonintl.com, AIXSupport@livingstonintl.com
Weekly System Administration Operation
Insight System weekly operation jobs are run automatically by the OS, scheduled in the root crontab. The jobs include:
System Backup (mksysb)
Job: System Backup

  Script:    ksh -c "/insight/local/backup/sysbkup.ksh rmt0"
  Scheduled: 23:05, Saturday
  Log:       /dmqjtmp/archiveSysbkupLog/sysbkup.out
  Email:     computerops@livingstonintl.com, AIXSupport@livingstonintl.com
WHTR31169_ifx01-bkup
1. Which file systems (and/or directories/files) are backed up each day, fully or incrementally?
All drives.
2. What is the schedule (start time / duration) of each file system's backup?
Daily incremental backups start at 1 AM on Monday, Tuesday, Wednesday, Thursday, Friday and Sunday (around 1 to 3 hours to complete).
The full backup starts at 1 PM on Saturday (around 2 hours to complete).
3. What is the retention (how long each backup is kept)?
On disks: 35 days (temporarily set to 60 days).
On tapes: monthly full 390 days; yearly full 2555 days.
Monthly System Administration Operation
Insight System monthly Operation jobs, archive and purge one month Insight database
data in table B3 and related tables 18 months ago from production database to archive
database, are run manually, which normally scheduled on the third Thursday every
month.
Data from 2007/05 to 2008/10 is on database ip_arch01@ardb
Data from 2008/11 to 2010/04 is on database ip_arch02@ardb
Data from 2010/05 to 2011/10 is on database ip_arch03@ardb
Started from 2013-05-03:
Data from 2011/11 to 2013/04 is on database ip_arch04@ardb
Started from 2014-11-02:
Data from 2013/05 to 2014/10 is on database ip_arch05@ardb
Started from 2016-05-01:
Data from 2014/11 to 2016/04 is on database ip_arch06@ardb (now)
Started from 2017-11-05:
Data from 2016/05 to 2017/10 is on database ip_arch07@ardb
Started from 2019-05-05:
Data from 2017/11 to 2019/04 is on database ip_arch08@ardb
….
Note: every archive database holds 18 months (1.5 years) of data.
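Since each window is exactly 18 months, the archive database owning a given data month can be computed rather than looked up. A hypothetical helper (not an Insight script; the database names and the 2007/05 starting month come from the list above):

```shell
#!/bin/sh
# Hypothetical helper: given a data month YYYY/MM, compute which
# ip_archNN database holds it, using the 18-month windows listed
# above (ip_arch01 begins at 2007/05).
month_key=2013/07                      # sample month to look up
y=${month_key%/*}
m=${month_key#*/}
m=${m#0}                               # strip a leading zero
elapsed=$(( (y - 2007) * 12 + m - 5 )) # months since 2007/05
bucket=$(( elapsed / 18 + 1 ))         # 18 months per archive db
archdb=$(printf 'ip_arch%02d' "$bucket")
echo "$month_key -> $archdb"           # prints "2013/07 -> ip_arch05"
```

For 2013/07 this yields ip_arch05, matching the 2013/05 to 2014/10 window in the list.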
Archive database structure and tables (Tariff is not included)
TABLE
ip_arch03
ip_arch04
ip_arch05
account_contact
0
0
accountcontact_iid
0
0
as_accounted
0
0
as_accounted_iid
0
0
as_claimed
0
0
as_claimed_iid
0
0
b3
4968513
292330
0
b3_iid
0
0
b3_lincmt_iid_1
0
0
b3_lincmt_iid_2
0
0
b3_lincmt_iid_3
0
0
b3_lincmt_iid_4
0
0
b3_lincmt_iid_5
0
0
b3_line
31857472
2055028
0
b3_line_comment
3360
212
0
b3_line_dia
0
0
b3_line_iid
0
0
b3_line_iid_1
0
0
b3_line_iid_2
0
0
b3_line_iid_3
0
0
b3_line_iid_4
0
0
b3_line_iid_5
0
0
b3_line_vio
0
0
b3_linecomment_iid
0
0
b3_rcpdet_iid_1
0
0
b3_rcpdet_iid_2
0
0
b3_rcpdet_iid_3
0
0
b3_rcpdet_iid_4
0
0
b3_rcpdet_iid_5
0
0
b3_recap_details
62343047
4043047
0
b3_recapdetail_iid
0
0
b3_subhdr_iid
0
0
b3_subhdr_iid_1
0
0
b3_subhdr_iid_2
0
0
b3_subhdr_iid_3
0
0
b3_subhdr_iid_4
0
0
b3_subhdr_iid_5
0
0
b3_subheader
7396774
440043
0
b3b
0
0
b3b_iid
0
0
bat_control
0
0
bat_id
0
0
bat_info
0
0
branch
0
0
canct_off
323
323
323
carrier
0
0
claim_log
0
0
claim_log_iid
0
0
client
0
0
client_iid
0
0
client_invoice
0
0
company
0
0
company_iid
0
0
contact_type
0
0
ctry_code
622
622
623
currency_code
0
0
documents_list
0
0
fldidtbl_datatypes
0
0
gst_rate_code
0
0
hs_duty_rate
0
0
hs_uom
0
0
insight_pdq
0
0
ip_b3b
0
0
ip_b3b_iid
0
0
ip_cci
0
0
ip_cci_iid
0
0
ip_cci_line
0
0
ip_cci_line_iid
0
0
ip_ccn
0
0
ip_ccn_iid
0
0
ip_rmd
0
0
ip_rmd_iid
0
0
lii_account
0
0
lii_client
0
0
lii_contact
0
0
loaddia_16
0
0
loaddia_29
0
0
loadvio_16
0
0
loadvio_29
0
0
privilege
0
0
product
0
0
product_used
0
0
reporterr
0
0
rpt_b3
0
0
rpt_b3_line
0
0
rpt_b3_subheader
0
0
search_criteria
0
0
securgroup
0
0
securuser
0
0
securuser_iid
0
0
services
0
0
srch_crit_batch
0
0
state_model
0
0
status_history
0
0
status_history_dia
0
0
status_history_iid
0
0
status_history_vio
0
0
stringtable
37
38
38
sysaggregates
6
6
6
sysams
6
6
6
sysattrtypes
2
0
2
sysblobs
6
6
6
syscasts
342
342
342
syschecks
0
0
syscolattribs
0
0
syscolauth
40
40
40
syscoldepend
738
738
738
syscolumns
1541
1539
1539
sysconstraints
842
842
842
sysdefaults
0
0
sysdepend
39
39
39
sysdirectives
0
0
sysdistrib
303
252
0
sysdomains
0
0
syserrors
0
0
sysfragauth
0
0
sysfragments
163
263
163
sysindexes
0
0
sysindices
269
269
269
sysinherits
0
0
syslangauth
0
0
syslogmap
0
0
sysobjstate
1066
1066
1066
sysopclasses
2
2
2
sysopclstr
0
sysprocauth
457
457
457
sysprocbody
3857
3857
3857
sysproccolumns
1566
1566
1566
sysprocedures
472
472
472
sysprocplan
617
622
602
sysreferences
16
16
16
sysroleauth
0
0
sysroutinelangs
5
5
5
sysseclabelauth
0
0
sysseclabelcomponentelements
0
0
sysseclabelcomponents
0
0
sysseclabelnames
0
0
sysseclabels
0
0
syssecpolicies
0
0
syssecpolicycomponents
0
0
syssecpolicyexemptions
0
0
syssequences
0
0
syssurrogateauth
0
0
syssynonyms
0
0
syssyntable
0
0
systabamdata
0
0
systabauth
203
202
202
systables
199
198
198
systraceclasses
0
0
systracemsgs
0
0
systrigbody
462
462
462
systriggers
61
61
61
sysusers
3
3
3
sysviews
165
165
165
sysviolations
0
0
sysxadatasources
0
0
sysxasourcetypes
0
0
sysxtddesc
0
0
sysxtdtypeauth
0
0
sysxtdtypes
21
20
21
tariff
0
0
tariff_code
0
0
tariff_dia
0
0
tariff_treatment
0
0
tariff_vio
0
0
terr
0
0
tipsysflds
0
0
transp_mode
7
7
7
tservice
0
0
user_locus_xref
0
0
usergroup
0
0
userlocusxref_iid
0
0
usport_exit
447
447
437
vw_claim_type
vw_item_type
vw_search_criteria
vw_tarifftrtmnt
vw_user_cltacct
Prepare chunk files on OS level for Database Server(ardb)
….
#touch /ach_dat1/ach_dat1.80
…..
#touch /ach_dat2/ach_dat2.70
….
#/usr/bin/chown -R informix:informix /ach_dat1
#/usr/bin/chmod -R 660 /ach_dat1
(reference script: /archbkup/bin/cgi.mkchunk)
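The elided touch commands above create one cooked file per chunk; a loop does the same job. A sketch with an illustrative demo directory and a count of 5 (the real file list and ownership step live in /archbkup/bin/cgi.mkchunk):

```shell
#!/bin/sh
# Sketch of the elided touch loop from cgi.mkchunk: pre-create one
# cooked chunk file per slot and set Informix-friendly permissions.
# The directory and the count of 5 are illustrative only.
ROOT=/tmp/demo_ach_dat1        # stands in for /ach_dat1
rm -rf "$ROOT"
mkdir -p "$ROOT"
i=1
while [ "$i" -le 5 ]; do
    touch "$ROOT/ach_dat1.$i"
    i=$((i + 1))
done
# Real run (as root): chown -R informix:informix /ach_dat1
chmod 660 "$ROOT"/ach_dat1.*   # Informix chunks must be mode 660
```

The chown is commented out here because it needs root and an informix user; on the real server both the chown and the chmod are required before onspaces will accept the files.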
Add chunk files to dbspaces on Database Server(ardb)
Start DB Server (ardb): #. /home/informix/ids115.env ardb; oninit
#onspaces -a datadbs1 -p /ach_dat1/ach_dat1.80 -o 0 -s 1000000
….
#onspaces -a datadbs2 -p /ach_dat2/ach_dat2.70 -o 0 -s 1000000
(reference script: /archbkup/bin/cgi.addchunk)
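The elided onspaces calls follow the same shape as the two shown. A dry-run sketch that only prints the commands it would issue, one 1,000,000 KB cooked chunk per file at offset 0 (the chunk count here is illustrative; see /archbkup/bin/cgi.addchunk for the real loop):

```shell
#!/bin/sh
# Dry run: generate the onspaces -a commands for a run of chunk
# files instead of executing them (onspaces is not assumed here).
added=0
for i in 1 2 3; do
    printf 'onspaces -a datadbs1 -p /ach_dat1/ach_dat1.%s -o 0 -s 1000000\n' "$i"
    added=$((added + 1))
done
```

Each file is its own chunk, so the offset stays 0 for every call; only the path changes.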
Purge Archive DB(ip_arch01) older than 7 years
#/archbkup/b3_archive/b3_arch_purge.sh
#-------------------------------------------------
# Insight keeps only 7 years b3 data,
# purge data older than 7 years in archive db
#-------------------------------------------------
set -v
set -x
##input clear archive database here
database=ip_arch
diretory=/archbkup/dbbkup
log=$diretory/$database.purge.log
. /home/informix/ids115.env ardb
##processing start
[[ $INFORMIXSERVER == "ipdb" ]] && exit 1
[[ $database == "ip_0p" ]] && exit 2
onstat -d >$log
# dbexport -ss $database -o $diretory >>$log
#comment out, tables being referenced cannot be truncated
# dbaccess $database <<EOF 1>>$log 2>&1
# TRUNCATE b3_line_comment;
# TRUNCATE b3_recap_details;
# TRUNCATE b3_line;
# TRUNCATE b3_subheader;
# TRUNCATE b3b;
# TRUNCATE status_history;
# TRUNCATE b3;
# EOF
dbaccess $database <<EOF 1>>$log 2>&1
DATABASE $database;
DROP TABLE $database@ardb:informix.b3_subheader CASCADE;
DROP TABLE $database@ardb:informix.b3_recap_details CASCADE;
DROP TABLE $database@ardb:informix.b3_line_comment CASCADE;
DROP TABLE $database@ardb:informix.b3_line CASCADE;
DROP TABLE $database@ardb:informix.b3 CASCADE;
CLOSE DATABASE;
EOF
# echo "update statistics"|dbaccess $database >>$log
##import db again if needed
# echo "rename database $database to $database_replaced"|dbaccess sysmaster
# dbimport -i $diretory -d datadbs1 $database
onstat -d >>$log
exit 0
Using oncheck -cc and oncheck -pc, check the system catalog tables of all databases and verify the tables' status after you do the table maintenance; some tables may be corrupted due to the constraints between them. Restore the database (dbimport) from the dbexport if anything is wrong.
Create Database (ip_arch05)
The following statement defines the ip_arch05 database in the datadbs1 dbspace, the default dbspace where this database's schema (tables, indexes, ...) will be created:
CREATE DATABASE ip_arch05 IN datadbs1 WITH LOG;
Element: database (ip_arch05)
  Description:  Name that you declare here for the new database that you are creating
  Restrictions: Must be unique among the names of databases on the database server (ardb)
  Syntax:       Database Name

Element: dbspace (datadbs1)
  Description:  The dbspace to store the data for this database; the default is the root dbspace
  Restrictions: Must already exist
  Syntax:       Identifier
Re-creating the schema in the newly created database (ip_arch05)
You can use dbschema and DB-Access to save the schema from a database (ip_arch04) and then re-create the schema in another database (ip_arch05). A dbschema output file can contain the statements for creating an entire database.
To save a database schema and re-create the database:
1. Use dbschema to save the schema to an output file, such as ip_arch04.sql:
dbschema -d ip_arch04 > ip_arch04.sql
You also need to use the -ss option to generate server-specific information, which records the exact dbspaces where all these tables and indexes will reside:
dbschema -d ip_arch04 -ss > ip_arch04.sql
2. Copy ip_arch04.sql to ip_arch05.sql, and remove the header information about dbschema, if any, from the output file.
Add a CREATE DATABASE statement at the beginning of the output file or use DB-Access to create
a new database.
CREATE DATABASE ip_arch05 IN datadbs1 with log;
CONNECT TO 'ip_arch05@ardb' USER 'informix' USING 'infxrmvb';
If you create the database without logging (the default), one quick way to turn on the database's logged mode is:
ontape -s -L 0 -B ip_arch05 -t /dev/null
Modify the following lines (likely not needed):
>
2972c2976
< WHERE b3iid = s_b3iid and approveddate like '2010/05/%';
---
> WHERE b3iid = s_b3iid and approveddate like '2014/11/%';
4869a4874
3. Use DB-Access to re-create the schema in a new database:
#su - informix
$dbaccess - ip_arch05.sql
When you use ip_arch05.sql to create a database on a different database server, confirm that
dbspaces exist.
The databases ip_arch04 and ip_arch05 differ in name but have the same schema.
The archived tables
b3
b3_line
b3_line_comment
b3_recap_details
b3_subheader
When creating a new archive database, say ip_arch05@ardb, we need to synchronize the following tables with the production database ip_0p@ipdb:
canct_off
ctry_code
stringtable
transp_mode
usport_exit
Example: the SQL we run is like:
echo "insert into ip_arch05@ardb:informix.usport_exit select * from ip_0p@ipdb:informix.usport_exit" | dbaccess ip_arch05
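The same echo | dbaccess pattern applies to all five reference tables, so it can be looped. A dry-run sketch that builds and prints each INSERT instead of piping it in (dbaccess is not assumed to be installed here):

```shell
#!/bin/sh
# Dry run: loop the reference-table sync over all five tables,
# printing each INSERT statement that would be piped to dbaccess.
synced=0
for t in canct_off ctry_code stringtable transp_mode usport_exit; do
    sql="insert into ip_arch05@ardb:informix.$t select * from ip_0p@ipdb:informix.$t"
    echo "$sql"        # real run: echo "$sql" | dbaccess ip_arch05
    synced=$((synced + 1))
done
```

For a real run, drop the comment and pipe each statement into dbaccess exactly as in the example above.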
Switch archive databases when the 1.5-year limit is hit
root@ifx01:/archbkup/bin># cd /insight/local/b3_arch
root@ifx01:/insight/local/b3_arch># cat autoArchive.ksh
#!/bin/ksh
######################################################################
#Archive B3 data from Production instance to performance instance #
#Purge will be done manually after checking the Archive
#Author : bob chong #
#Date : Sept 20, 2000 #
######################################################################
umask 0000
INFORMIXSERVER="ardb"
INFORMIXDIR="/usr/apps/inf/ver115UC3"
GL_DATETIME="%iY/%m/%d %H:%M:%S"
PATH=$INFORMIXDIR/bin:$PATH
export INFORMIXDIR INFORMIXSERVER PATH GL_DATETIME
local_dir=/insight/local/b3_arch
week_no=`date +%w`
day_no=`date +%d`
cd $local_dir
#$INFORMIXDIR/bin/dbaccess ip_arch@ardb < ${local_dir}/startarchive.sql
#New Added on 2008/11/17;
#$INFORMIXDIR/bin/dbaccess ip_arch01@ardb < ${local_dir}/startarchive.sql
#New Added on 2010/05/20;
#$INFORMIXDIR/bin/dbaccess ip_arch02@ardb < ${local_dir}/startarchive.sql
#New Added on 2011/11/17;
#$INFORMIXDIR/bin/dbaccess ip_arch03@ardb < ${local_dir}/startarchive.sql
#New Added on 2013/05/17;
#$INFORMIXDIR/bin/dbaccess ip_arch04@ardb < ${local_dir}/startarchive.sql
#New Added on 2014/10/17;
$INFORMIXDIR/bin/dbaccess ip_arch05@ardb < ${local_dir}/startarchive.sql
exit 0
root@ifx01:/insight/local/b3_arch># cat startarchive.sql
execute procedure archiveandpurge();
Please remember: Archive B3 and Purge B3 are automatic now. We need to modify the scripts under /archbkup/bin/ (cgi.archiveB3, cgi.purgeB3) to change the archive database name to the new one (here, for example, from ip_arch04 to ip_arch05).
#!/bin/ksh
################################################################################
#
# Name: cgi.archiveB3
#
# Reference: n/a
#
# Description: 1. Start Archive DB @ardb
# 2. Backup B3 data 1.5 year ago from @ipdb to @ardb
# 3. Backup @ardb
# 4. Stop @ardb
#
# Command: cgi.archiveB3
#
# Modification History:
#
# Date Name Description
# ------------------------------------------------------------------
# 2014-01-03 Liru Chen
#
####################################################################################
set -v
set -x
#if [[ $# -lt 3 ]]; then
# echo " USAGE: cgi.archiveB3 <'targetString'>"
# exit 2
#fi
integer year=`date +%Y`
integer month=`date +%m`
if [[ $month -le 6 ]]
then
integer purgeyear=year-2
integer purgemonth=month+6
purgedate=${purgeyear}/0${purgemonth}%
if [[ purgemonth -ge 10 ]]
then
purgedate=${purgeyear}/${purgemonth}%
fi
else
integer purgeyear=year-1
integer purgemonth=month-6
purgedate=${purgeyear}/0${purgemonth}%
fi
. /home/informix/ids115.env ipdb
integer records=`echo \
"select count(*) from b3 where approveddate like '$purgedate'" \
| dbaccess ip_0p \
| grep -v count `
echo $records
[[ $records -eq 0 ]] && {
mail -s "B3 already Purged! " lchen@livingstonintl.com < /dev/null
exit 1
}
#Check B3 Data Backup
[[ -f /archbkup/etc/arch.done.year${purgeyear}month${purgemonth} ]] && {
mail -s "B3 Archive Done! " lchen@livingstonintl.com < /dev/null
exit 2
}
#start Archive DB
/archbkup/bin/cgi.startarch
. /home/informix/ids115.env ardb
integer archrecords=`echo \
"select count(*) from b3 where approveddate like '$purgedate'"\
| dbaccess ip_arch05 \
| grep -v count `
echo $archrecords
[[ $archrecords -ne 0 ]] && exit 1
#start Archive B3 from @ipdb to @ardb
nohup /insight/local/b3_arch/run_autoarchive.ksh >> /dmqjtmp/archiveB3Log/monthly_archive.log 2>&1
. /home/informix/ids115.env ardb
integer archrecords=`echo \
"select count(*) from b3 where approveddate like '$purgedate'"\
| dbaccess ip_arch05 \
| grep -v count `
echo $archrecords
[[ $records -ne $archrecords ]] && {
mail -s " Archive B3 Records Error" lchen@livingstonintl.com < /dev/null
exit 2
}
errorrecords=`echo "select * from reporterr" | dbaccess ip_arch05|grep -v tablename`
[[ $errorrecords != "" ]] && {
mail -s "Error in reporterr" lchen@livingstonintl.com < /dev/null
exit 2
}
#start backup Archive DB
/archbkup/bin/cgi.archbackup
#Check Storage usage and Stop Archive DB
/archbkup/bin/cgi.stoparch
touch /archbkup/etc/arch.done.year${purgeyear}month${purgemonth}
exit 0
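The purgedate arithmetic at the top of cgi.archiveB3 (and repeated in cgi.purgeB3) always lands exactly 18 months back. Here it is isolated with a fixed sample date so the result can be checked by hand:

```shell
#!/bin/sh
# The purgedate computation from cgi.archiveB3/cgi.purgeB3, with a
# hard-coded sample run date instead of `date +%Y` / `date +%m`.
# For a run in year/month Y/M it targets the month 18 months earlier.
year=2014; month=3              # pretend the run happens in 2014/03
if [ "$month" -le 6 ]; then
    purgeyear=$((year - 2))     # Jan-Jun: go back two calendar years
    purgemonth=$((month + 6))   # ...and forward six months
else
    purgeyear=$((year - 1))     # Jul-Dec: back one year, back six months
    purgemonth=$((month - 6))
fi
purgedate=$(printf '%d/%02d%%' "$purgeyear" "$purgemonth")
echo "$purgedate"               # prints "2012/09%"
```

A run in 2014/03 therefore purges b3 rows with approveddate like '2012/09%', which matches the 2012/09 row scheduled on 2014/03/01 in the purge table above.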
#!/bin/ksh
################################################################################
#
# Name: cgi.purgeB3
#
# Reference: n/a
#
# Description: 1. Check B3 data backup and Archive DB backup is done
# 2. Purge B3 data 1.5 year ago from @ipdb
#
# Command: cgi.purgeB3
#
# Modification History:
#
# Date Name Description
# ----------------------------------------------------------------
# 2014-01-03 Liru Chen
#
####################################################################################
set -v
set -x
#if [[ $# -lt 3 ]]; then
# echo " USAGE: cgi.archiveB3 <'targetString'>"
# exit 2
#fi
integer year=`date +%Y`
integer month=`date +%m`
if [[ $month -le 6 ]]
then
integer purgeyear=year-2
integer purgemonth=month+6
purgedate=${purgeyear}/0${purgemonth}%
if [[ purgemonth -ge 10 ]]
then
purgedate=${purgeyear}/${purgemonth}%
fi
else
integer purgeyear=year-1
integer purgemonth=month-6
purgedate=${purgeyear}/0${purgemonth}%
fi
storageusage=/archbkup/etc/ipdb_storage.`date +%Y%m`
#Check B3 Backup Status
[[ -f /archbkup/etc/arch.done.year${purgeyear}month${purgemonth} ]] || {
mail -s "Backup B3 first before Purge! " lchen@livingstonintl.com < /dev/null
exit 2
}
#start Archive DB
/archbkup/bin/cgi.startarch
. /home/informix/ids115.env ardb
integer archrecords=`echo \
"select count(*) from b3 where approveddate like '$purgedate'" \
| dbaccess ip_arch05 \
| grep -v count `
echo $archrecords
#Stop Archive DB
/archbkup/bin/cgi.stoparch
. /home/informix/ids115.env ipdb
integer records=`echo \
"select count(*) from b3 where approveddate like '$purgedate'" \
| dbaccess ip_0p \
| grep -v count `
echo $records
[[ $records -eq 0 ]] && {
mail -s "No B3 records in B3 ProdDB! " lchen@livingstonintl.com < /dev/null
exit 1
}
[[ $archrecords -ne $records ]] && {
mail -s "arch&prod B3 mismatch" lchen@livingstonintl.com < /dev/null
exit 1
}
#start Purge B3 from @ipdb
ps -ef|grep -v grep | grep ./tcl
[[ $? -eq 0 ]] && {
mail -s " Data loading process running!" lchen@livingstonintl.com < /dev/null
exit 1
}
#Check B3 Purge Status
[[ -f /archbkup/etc/purge.done.year${purgeyear}month${purgemonth} ]] && {
mail -s "B3 Purge already done! " lchen@livingstonintl.com < /dev/null
exit 2
}
cd /usr/apps/inf/bob/delb3
nohup ./deleteb3.ksh > ./deleteb3.out 2>&1
integer deletedrecords=`cd /usr/apps/inf/bob/delb3; \
grep ":s_b3iid value=" deleteb3_1.trc | wc -l`
echo $deletedrecords
[[ $records -ne $deletedrecords ]] && {
mail -s " Purge B3 Records Error" lchen@livingstonintl.com < /dev/null
# exit 2
}
#Clean the RUNNER log
cd /dmqjtmp/archiveRunnerLog
cp -p runner.10.out /recyclebox; cat /dev/null > runner.10.out
cp -p runner.71.out /recyclebox; cat /dev/null > runner.71.out
cp -p runner.all.out /recyclebox; cat /dev/null > runner.all.out
#Check storage usage
/archbkup/bin/cgi.checkDBSpace > $storageusage
mail -s "Purge Done Storage Usage" lchen@livingstonintl.com < $storageusage
touch /archbkup/etc/purge.done.year${purgeyear}month${purgemonth}
exit 0
Archive and purge steps
Archive production database server (ipdb) to archive database server (ardb)
- Start at 9:00; takes 3 hours 21 minutes (50 mins)
Backup archive database server (ardb) to tapes
- Start at 13:00; takes 4 hours (20 mins)
Purge the archived data from production database server (ipdb)
- Start at 17:00; takes 8 hours (2 hours 45 mins)
B3 data purged   Rows purged in B3        Scheduled archive date
2011/02          -                        2012/08/23
2011/03          294448                   2012/09/20
2011/04          275047                   2012/10/24
2011/05          288954                   2012/11/15
2011/06          292100                   2012/12/20
2011/07          275667                   2013/01/24
2011/08          287424                   2013/02/14
2011/09          279420                   2013/03/21
2011/10          289129 (289127 purged)   2013/04/18
Started from 2013-05-03 (database: ip_arch04)

B3 data purged   Rows purged in B3        Scheduled archive date
2011/11          292330 (292312 purged)   2013/05/03
2011/12          257575 (257575 purged)   2013/06/20
2012/01          266132 (266132 purged)   2013/07/24
2012/02          270914 (270911 purged)   2013/08/15
2012/03          296134 (296133 purged)   2013/09/20
2012/04          283086 (283084 purged)   2013/10/12
2012/05          302629 (302629 purged)   2013/11/02
2012/06          288729 (288729 purged)   2013/12/01
2012/07          284623 (284619 purged)   2014/01/03
2012/08          296854 (296854 purged)   2014/02/01
2012/09          270117 (270117 purged)   2014/03/01
2012/10          301103 (301103 purged)   2014/04/05
2012/11          289758 (289758 purged)   2014/05/02
2012/12          265453 (265458 purged)   2014/06/01
2013/01          265506 (265506 purged)   2014/07/06
2013/02          259348 (259348 purged)   2014/08/03
2013/03          259396 (259396 purged)   2014/09/02
2013/04          276569 (276569 purged)   2014/10/02
Started from 2014-11-02 (database: ip_arch05)

B3 data purged   Rows purged in B3        Scheduled archive date
2013/05          271198 (271198 purged)   2014/11/02
2013/06          258618 (258618 purged)   2014/12/07
2013/07          308295 (308295 purged)   2015/01/04
2013/08          312092 (312092 purged)   2015/02/01
2013/09          309710 (309710 purged)   2015/03/01
2013/10          325726 (325726 purged)   2015/04/02
2013/11          311641 (311641 purged)   2015/05/02
2013/12          279259 (279259 purged)   2015/06/07
2014/01          298792 (298792 purged)   2015/07/03
2014/02          288473 (288473 purged)   2015/08/01
2014/03          -                        2015/09/01
2014/04          -                        2015/10/05
2014/05          376557 (376557 purged)   2015/11/01
2014/06          325662 (325662 purged)   2015/12/06
                 (b3: 5970874|5647780; b3_line: 47513322|45235522; b3_recap_details: 82869346|78745001)
2014/07          332722 (332722 purged)   2016/01/03
                 (b3: 5906827|5575119; b3_line: 47225344|44992247; b3_recap_details: 82144449|77971195)
2014/08          310173 (310173 purged)   2016/02/03
2014/09          320541 (320541 purged)   2016/03/02
2014/10          337168 (337168 purged)   2016/04/02
Started from 2016-05-01 (database: ip_arch06)

B3 data purged   Rows purged in B3        Scheduled archive date
2014/11          314924 (314924 purged)   2016/05/01
2014/12          313754 (313754 purged)   2016/06/05
2015/01          307766 (307766 purged)   2016/07/03
2015/02          267037 (267037 purged)   2016/08/07
2015/03          294964 (294964 purged)   2016/09/01
2015/04          272637 (272637 purged)   2016/10/02
2015/05          268664 (268664 purged)   2016/11/02
2015/06          280754 (280754 purged)   2016/12/07
2015/07          277913 (277913 purged)   2017/01/01
2015/08          264209 (264209 purged)   2017/02/01
2015/09          272365 (272365 purged)   2017/03/01
2015/10          285070 (285070 purged)   2017/04/05
2015/11          266299 (266299 purged)   2017/05/01
2015/12          266154 (266154 purged)   2017/06/06
2016/01          248399 (248399 purged)   2017/07/03
2016/02          258372 (258372 purged)   2017/08/03
2016/03          -                        2017/09/02
2016/04          -                        2017/10/02
Note: to check the number of rows to be purged for a month from table b3 on the production database ip_0p@ipdb (say we will purge the b3 data of 2011/03):
echo "select count(*) from b3 where approveddate like '2011/03%'" | dbaccess ip_0p
Archive data from production database server (ipdb) to archive database server (ardb)
Connect to the IDS database server ipdb; that is, initiate a session from a client application to the IDS database server.
Log in as user informix and set up the Informix application environment variables:
$ . ./ids115.env ipdb
Server ipdb environment ...
$ onstat -
IBM Informix Dynamic Server Version 11.50.UC3W2 -- On-Line -- Up 232 days 18:01:16 -- 2051200 Kbytes
$ dbaccess
----------------------- ip_0p@ipdb --------- Press CTRL-W for Help --------
select count(*) from b3
where approveddate like '2011/03%'
(count(*))
294448
or
$ echo "select count(*) from b3 where approveddate like '2011/03%'" | dbaccess ip_0p
Database selected.
(count(*))
288729
1 row(s) retrieved.
Database closed.
Start archive database server
$ . ./ids115.env ardb
Server ardb environment ...
$ onstat -
shared memory not initialized for INFORMIXSERVER 'ardb'
$ echo $INFORMIXSERVER
ardb
$ oninit
WARNING : If you intend to use J/Foundation or GLS for Unicode feature(GLU) with this Server instance, please make
sure that your SHMBASE value specifies in onconfig is 0x40000000L or above. Otherwise you will have problems
while attaching or dynamically adding virtual shared memory segments. Please refer to Server machine notes for
more information.
$ onstat -
IBM Informix Dynamic Server Version 11.50.UC3W2 -- On-Line -- Up 00:00:19 -- 929856 Kbytes
Run autoarchive.ksh scripts
Job: Database archive

  Script:    /insight/local/b3_arch/run_autoarchive.ksh
  Scheduled: 19:57, normally the third Thursday every month
  Log:       /dmqjtmp/archiveB3Log/monthly_archive.log
  Email:     AIXSupport@livingstonintl.com
This script can be run manually or by cron. Just uncomment the line in the root crontab, which schedules the job at 9:00AM. (If you want the system to start the archive process at a scheduled time, say after work hours, change the scheduled time, e.g. to '0 23 * * *'.)
# crontab -e
#0 9 * * * /insight/local/b3_arch/run_autoarchive.ksh >> /dmqjtmp/archiveB3Log/monthly_archive.log 2>&1
Remember to comment this line out again immediately after the archive process has started, in case you forget later.
Confirm the right data is archived on ip_arch04@ardb
You will get an automatic email such as "Monthly B3 Archive Done @ Thu Nov 15 11:15:17 EST 2012" after the archive completes.
Log in as user informix:
$ . ./ids115.env ardb
Server ardb environment ...
$ echo $INFORMIXSERVER
ardb
$ onstat -
IBM Informix Dynamic Server Version 11.50.UC3W2 -- On-Line -- Up 00:00:19 -- 929856 Kbytes
$ dbaccess
----------------------- ip_arch04@ardb --------- Press CTRL-W for Help --------
select count(*) from b3
where approveddate like '2011/03%'
(count(*))
294448
or
$ echo "select count(*) from b3 where approveddate like '2011/03%'" | dbaccess ip_arch04
Check the REPORTERR table for any errors during archiving.
Select * from reporterr
currentday tablename mode keyvalue sqlerr isamerr
No rows found.
$ echo "select * from reporterr" | dbaccess ip_arch04
Starting 2013-05-03, we switched informix server ardb's database from ip_arch03 to ip_arch04; modify script /insight/local/b3_arch/autoArchive.ksh.
Starting 2014-11-02, we switched informix server ardb's database from ip_arch04 to ip_arch05; modify script /insight/local/b3_arch/autoArchive.ksh.
root@ifx01:/insight/local/b3_arch #cat autoArchive.ksh
#!/bin/ksh
######################################################################
#Archive B3 data from Production instance to performance instance #
#Purge will be done manually after checking the Archive
#Author : bob chong #
#Date : Sept 20, 2000 #
######################################################################
umask 0000
INFORMIXSERVER="ardb"
INFORMIXDIR="/usr/apps/inf/ver115UC3"
GL_DATETIME="%iY/%m/%d %H:%M:%S"
PATH=$INFORMIXDIR/bin:$PATH
export INFORMIXDIR INFORMIXSERVER PATH GL_DATETIME
local_dir=/insight/local/b3_arch
week_no=`date +%w`
day_no=`date +%d`
cd $local_dir
#$INFORMIXDIR/bin/dbaccess ip_arch@ardb < ${local_dir}/startarchive.sql
#New Added on 2008/11/17;
#$INFORMIXDIR/bin/dbaccess ip_arch01@ardb < ${local_dir}/startarchive.sql
#New Added on 2010/05/20;
#$INFORMIXDIR/bin/dbaccess ip_arch02@ardb < ${local_dir}/startarchive.sql
#New Added on 2011/11/17;
#$INFORMIXDIR/bin/dbaccess ip_arch03@ardb < ${local_dir}/startarchive.sql
#New Added on 2013/05/17;
#$INFORMIXDIR/bin/dbaccess ip_arch04@ardb < ${local_dir}/startarchive.sql
#New Added on 2014/11/02;
$INFORMIXDIR/bin/dbaccess ip_arch05@ardb < ${local_dir}/startarchive.sql
exit 0
root@ifx01:/insight/local/b3_arch># cat startarchive.sql
execute procedure archiveandpurge();
Archive procedure is on ardb server’s database ip_arch05@ardb:
CREATE PROCEDURE "informix".archiveandpurge() RETURNING INT, CHAR(20);
--Define Working variables
DEFINE startdate CHAR(20);
DEFINE enddate CHAR(20);
DEFINE archivecount INT;
DEFINE archiveDay DATE;
DEFINE s_b3iid INT;
DEFINE s_approveddate CHAR(20);
LET startdate = EXTEND(current, YEAR TO MONTH) - INTERVAL(1) YEAR TO YEAR - INTERVAL(6) MONTH TO MONTH;
LET enddate = EXTEND(current, YEAR TO MONTH) - INTERVAL(1) YEAR TO YEAR - INTERVAL(5) MONTH TO MONTH;
LET archiveDay = TODAY;
EXECUTE PROCEDURE insertarch(startdate, enddate);
-- SELECT COUNT(*)
-- INTO archivecount
-- FROM reporterr
-- WHERE currentday = archiveDay;
--
-- IF archivecount = 0 THEN
-- FOREACH
-- EXECUTE PROCEDURE deleteB3(startdate, enddate) INTO s_b3iid, s_approveddate
-- RETURN s_b3iid, s_approveddate WITH RESUME;
-- END FOREACH;
-- END IF
END PROCEDURE;
CREATE PROCEDURE "informix".insertarch(startdate CHAR(20),enddate CHAR(20))
-- Declare b3 table columns
DEFINE s_b3iid INT;
DEFINE s_liiclientno INT;
DEFINE s_liiaccountno INT;
DEFINE s_liibrchno INT;
DEFINE s_liirefno INT;
DEFINE s_acctsecurno INT;
DEFINE s_b3type CHAR(2);
DEFINE s_cargcntrlno CHAR(25);
DEFINE s_carriercode CHAR(4);
DEFINE s_createdate CHAR(20);
DEFINE s_custoff CHAR(4);
DEFINE s_k84date CHAR(20);
DEFINE s_modetransp CHAR(2);
DEFINE s_portunlading CHAR(4);
DEFINE s_reldate CHAR(20);
DEFINE s_status INT;
DEFINE s_totb3duty float;
DEFINE s_totb3exctax float;
DEFINE s_totb3gst float;
DEFINE s_totb3sima float;
DEFINE s_totb3vfd float;
DEFINE s_transno INT;
DEFINE s_weight INT;
DEFINE s_purchaseorder1 CHAR(15);
DEFINE s_purchaseorder2 CHAR(15);
DEFINE s_shipvia CHAR(18);
DEFINE s_locationofgoods CHAR(17);
DEFINE s_containerno CHAR(20);
DEFINE s_vendorname CHAR(25);
DEFINE s_vendorstate CHAR(3);
DEFINE s_vendorzip CHAR(10);
DEFINE s_freight float;
DEFINE s_usportexit CHAR(5);
DEFINE s_billoflading CHAR(10);
DEFINE s_cargcntrlqty float;
DEFINE s_approveddate CHAR(20);
DEFINE s_sbrnno CHAR(15);
DEFINE s_ccnqty INT;
DEFINE s_ccinumlines INT;
DEFINE s_invoiceqty INT;
DEFINE s_warehousenum INT;
DEFINE s_entname CHAR(35);
DEFINE s_entaddr1 CHAR(35);
DEFINE s_entaddr2 CHAR(35);
DEFINE s_entaddr3 CHAR(35);
DEFINE s_entaddr4 CHAR(30);
DEFINE s_entpostcd CHAR(9);
--Define Working variables
DEFINE tableName CHAR(25);
DEFINE currentDay DATE;
DEFINE mode CHAR(1);
DEFINE sqlErr INT;
DEFINE isamErr INT;
-- Trap Exception
ON EXCEPTION SET sqlErr, isamErr
CALL reportErr(currentDay,tableName,mode, s_b3iid, sqlErr,isamErr);
END EXCEPTION WITH RESUME;
SET LOCK MODE TO WAIT 60;
LET currentDay = today;
LET tableName = 'B3';
LET mode = 'I';
LET s_b3iid = NULL;
FOREACH WITH HOLD
SELECT b3iid, liiclientno, liiaccountno, liibrchno, liirefno, acctsecurno,
b3type,
cargcntrlno, carriercode, createdate, custoff, k84date, modetransp,
portunlading, reldate, status, totb3duty, totb3exctax, totb3gst,
totb3sima, totb3vfd, transno, weight, purchaseorder1, purchaseorder2,
shipvia, locationofgoods, containerno, vendorname, vendorstate, vendorzip,
freight, usportexit, billoflading, cargcntrlqty, approveddate,
sbrnno, ccnqty, ccinumlines, invoiceqty, warehousenum, entname,
entaddr1, entaddr2, entaddr3, entaddr4, entpostcd
INTO s_b3iid, s_liiclientno, s_liiaccountno, s_liibrchno, s_liirefno, s_acctsecurno,
s_b3type, s_cargcntrlno, s_carriercode, s_createdate, s_custoff, s_k84date,
s_modetransp, s_portunlading, s_reldate, s_status, s_totb3duty,
s_totb3exctax, s_totb3gst, s_totb3sima, s_totb3vfd, s_transno, s_weight,
s_purchaseorder1, s_purchaseorder2, s_shipvia, s_locationofgoods, s_containerno,
s_vendorname, s_vendorstate, s_vendorzip, s_freight, s_usportexit,
s_billoflading, s_cargcntrlqty, s_approveddate,
s_sbrnno, s_ccnqty, s_ccinumlines, s_invoiceqty, s_warehousenum, s_entname,
s_entaddr1, s_entaddr2, s_entaddr3, s_entaddr4, s_entpostcd
FROM ip_0p@ipdb:b3
WHERE (approveddate >= startdate and approveddate < enddate)
BEGIN
-- Trap Exception
ON EXCEPTION SET sqlErr, isamErr
CALL reportErr(currentDay,tableName,mode, s_b3iid, sqlErr,isamErr);
END EXCEPTION WITH RESUME;
insert into b3
values(s_b3iid, s_liiclientno, s_liiaccountno, s_liibrchno, s_liirefno, s_acctsecurno,
s_b3type, s_cargcntrlno, s_carriercode, s_createdate, s_custoff, s_k84date,
s_modetransp, s_portunlading, s_reldate, s_status, s_totb3duty,
s_totb3exctax, s_totb3gst, s_totb3sima, s_totb3vfd, s_transno, s_weight,
s_purchaseorder1, s_purchaseorder2, s_shipvia, s_locationofgoods, s_containerno,
s_vendorname, s_vendorstate, s_vendorzip, s_freight, s_usportexit,
s_billoflading, s_cargcntrlqty, s_approveddate,
s_sbrnno, s_ccnqty, s_ccinumlines, s_invoiceqty, s_warehousenum, s_entname,
s_entaddr1, s_entaddr2, s_entaddr3, s_entaddr4, s_entpostcd );
END
END FOREACH;
END PROCEDURE;
CREATE PROCEDURE "informix".insertb3(b3iid_arch INT)
--Define Error Variables
DEFINE sqlErr INT;
DEFINE isamErr INT;
--Define Working variables
DEFINE tableName CHAR(25);
DEFINE currentDay DATE;
DEFINE mode CHAR(1);
-- Trap Exception
ON EXCEPTION SET sqlErr, isamErr
-- CALL reportErr(currentDay,tableName,mode, b3iid_arch, sqlErr,isamErr);
RAISE EXCEPTION sqlErr,isamErr;
END EXCEPTION;
LET currentDay = today;
LET tableName = 'B3_SUB';
LET mode = 'I';
SET LOCK MODE TO WAIT 60;
--insert into b3_subheader table
INSERT INTO b3_subheader
SELECT * FROM ip_0p@ipdb:b3_subheader
WHERE b3iid = b3iid_arch;
END PROCEDURE;
CREATE PROCEDURE "informix".insertb3subheader(b3subiid_arch INT)
--Define Error Variables
DEFINE sqlErr INT;
DEFINE isamErr INT;
--Define Working variables
DEFINE tableName CHAR(25);
DEFINE currentDay DATE;
DEFINE mode CHAR(1);
-- Trap Exception
ON EXCEPTION SET sqlErr, isamErr
-- CALL reportErr(currentDay,tableName,mode,b3subiid_arch, sqlErr,isamErr);
RAISE EXCEPTION sqlErr,isamErr;
END EXCEPTION;
LET currentDay = today;
LET tableName = 'B3_LINE';
LET mode = 'I';
SET LOCK MODE TO WAIT 60;
--Insert into b3_line table
INSERT INTO b3_line
SELECT * FROM ip_0p@ipdb:b3_line
WHERE b3subiid = b3subiid_arch;
END PROCEDURE;
CREATE PROCEDURE "informix".insertb3line(b3lineiid_arch INT)
--Define Error Variables
DEFINE sqlErr INT;
DEFINE isamErr INT;
--Define Working variables
DEFINE tableName CHAR(25);
DEFINE currentDay DATE;
DEFINE mode CHAR(1);
-- Trap Exception
ON EXCEPTION SET sqlErr, isamErr
-- CALL reportErr(currentDay,tableName,mode, b3lineiid_arch, sqlErr,isamErr);
RAISE EXCEPTION sqlErr,isamErr;
END EXCEPTION;
LET currentDay = today;
LET tableName = 'B3_RECAP';
LET mode = 'I';
SET LOCK MODE TO WAIT 60;
--insert into recap_details table
INSERT INTO b3_recap_details
SELECT * FROM ip_0p@ipdb:b3_recap_details
WHERE b3lineiid = b3lineiid_arch;
LET tableName = 'B3_COMMENT';
--insert into b3_line_comment table
INSERT INTO b3_line_comment
SELECT * FROM ip_0p@ipdb:b3_line_comment
WHERE b3lineiid = b3lineiid_arch;
END PROCEDURE;
The purge procedure is on the production database ip_0p@ipdb:
CREATE PROCEDURE "informix".deleteb3_1()
-- Declare working variables
DEFINE startdate CHAR(20) ;
DEFINE enddate CHAR(20) ;
DEFINE s_b3iid INT;
DEFINE tableName CHAR(25);
DEFINE currentDay DATE;
DEFINE mode CHAR(1);
DEFINE sqlErr INT;
DEFINE isamErr INT;
-- Trap Exception
ON EXCEPTION SET sqlErr, isamErr
CALL reportErr(currentDay,tableName,mode, s_b3iid, sqlErr,isamErr);
END EXCEPTION WITH RESUME;
--Define working variables
LET startdate = EXTEND(current, YEAR TO MONTH) - INTERVAL(1) YEAR TO YEAR - INTERVAL(6) MONTH TO MONTH;
LET enddate = EXTEND(current, YEAR TO MONTH) - INTERVAL(1) YEAR TO YEAR - INTERVAL(5) MONTH TO MONTH;
LET s_b3iid = 0;
LET tableName = 'B3';
LET currentDay = today;
LET mode = 'D';
LET sqlErr = 0;
LET isamErr = 0;
SET DEBUG FILE TO './deleteb3_1.trc' ;
TRACE ON ;
TRACE "startdate value=" || startdate || " enddate value=" || enddate ;
FOREACH WITH HOLD
SELECT b3iid
INTO s_b3iid
FROM b3
WHERE approveddate >= startdate and approveddate < enddate
BEGIN
-- Trap Exception
ON EXCEPTION SET sqlErr, isamErr
CALL reportErr(currentDay,tableName,mode, s_b3iid, sqlErr,isamErr);
END EXCEPTION WITH RESUME;
TRACE "s_b3iid value=" || s_b3iid ;
DELETE FROM b3 WHERE b3iid = s_b3iid ;
END
END FOREACH;
END PROCEDURE;
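The two LET statements above implement a rolling 18-month retention window: startdate is the month 18 months before the current month, and enddate is the month after it, so each run deletes exactly one month of B3 rows. The same month arithmetic can be sketched in shell (month_shift is a hypothetical checking helper, not part of the delivered scripts):

```shell
# Compute the purge window used by deleteb3_1: the single month that is
# 18 months before the current month (start inclusive, end exclusive).
# month_shift YYYY MM OFFSET -> prints "YYYY-MM"
month_shift() {
    y=$1; m=$2; off=$3
    total=$(( y * 12 + (m - 1) + off ))      # months since year 0
    printf '%04d-%02d\n' $(( total / 12 )) $(( total % 12 + 1 ))
}

# Example: for a current month of 2012-10 the purged month is 2011-04.
startdate=$(month_shift 2012 10 -18)   # 1 year 6 months back
enddate=$(month_shift 2012 10 -17)     # 1 year 5 months back (exclusive)
echo "purge window: $startdate <= approveddate < $enddate"
```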
create procedure "informix".pd_b3(old_b3iid integer)
define errno integer;
define errmsg char(255);
define numrows integer;
-- Delete all children in "b3_subheader"
delete from b3_subheader
where b3iid = old_b3iid;
-- Delete all children in "b3b"
delete from b3b
where b3iid = old_b3iid;
-- Delete all children in "status_history"
delete from status_history
where b3iid = old_b3iid;
-- Delete all children in "containers"
delete from containers
where b3iid = old_b3iid;
end procedure;
create procedure "informix".pd_b3_line(old_b3lineiid integer)
define errno integer;
define errmsg char(255);
define numrows integer;
-- Delete all children in "b3_recap_details"
delete from b3_recap_details
where b3lineiid = old_b3lineiid;
-- Delete all children in "b3_line_comment"
delete from b3_line_comment
where b3lineiid = old_b3lineiid;
end procedure;
create procedure "informix".pd_b3_subheader(old_b3subiid integer)
define errno integer;
define errmsg char(255);
define numrows integer;
-- Delete all children in "b3_line"
delete from b3_line
where b3subiid = old_b3subiid;
end procedure;
Triggers tie all these procedures together:
create trigger "informix".td_b3 delete on "informix".b3 referencing
old as old_del
for each row
(
execute procedure "informix".pd_b3(old_del.b3iid ));
create trigger "informix".insertb3 insert on "informix".b3 referencing
new as post_ins
for each row
(
execute procedure "informix".insertb3(post_ins.b3iid
));
create trigger "informix".tu_b3_subheader update on "informix"
.b3_subheader referencing old as old_upd new as new_upd
for each row
(
execute procedure "informix".pu_b3_subheader(old_upd.b3subiid
,old_upd.b3iid ,new_upd.b3subiid ,new_upd.b3iid ));
create trigger "informix".td_b3_subheader delete on "informix"
.b3_subheader referencing old as old_del
for each row
(
execute procedure "informix".pd_b3_subheader(old_del.b3subiid
));
create trigger "informix".insertb3subheader insert on "informix"
.b3_subheader referencing new as post_ins
for each row
(
execute procedure "informix".insertb3subheader(post_ins.b3subiid
));
create trigger "informix".tu_b3_line update on "informix".b3_line
referencing old as old_upd new as new_upd
for each row
(
execute procedure "informix".pu_b3_line(old_upd.b3lineiid
,old_upd.b3subiid ,new_upd.b3lineiid ,new_upd.b3subiid ));
create trigger "informix".td_b3_line delete on "informix".b3_line
referencing old as old_del
for each row
(
execute procedure "informix".pd_b3_line(old_del.b3lineiid
));
create trigger "informix".insertb3line insert on "informix".b3_line
referencing new as post_ins
for each row
(
execute procedure "informix".insertb3line(post_ins.b3lineiid
));
create trigger "informix".ti_b3_recap_detail insert on "informix"
.b3_recap_details referencing new as new_ins
for each row
(
execute procedure "informix".pi_b3_recap_detail(new_ins.b3lineiid
));
create trigger "informix".tu_b3_recap_detail update on "informix"
.b3_recap_details referencing old as old_upd new as new_upd
for each row
(
execute procedure "informix".pu_b3_recap_detail(old_upd.b3recapiid
,old_upd.b3lineiid ,new_upd.b3recapiid ,new_upd.b3lineiid ));
Backup Archive database server (ardb)
Ask the operator to insert an Archive Tape Set (A/B/C/D) into the ifx01 tape drive.
There are 4 archive tape sets (A/B/C/D), two tapes per set.
Archive Tape Set A : tape: ifxarch-A-01 and tape: ifxarch-A-02
Archive Tape Set B : tape: ifxarch-B-01 and tape: ifxarch-B-02
Archive Tape Set C : tape: ifxarch-C-01 and tape: ifxarch-C-02
Archive Tape Set D : tape: ifxarch-D-01 and tape: ifxarch-D-02
Tape set recycle:
Scheduled archive date    Archive Tape Set
2012/08/23                Archive Tape Set B: ifxarch-B-01 and ifxarch-B-02
2012/09/20                Archive Tape Set C: ifxarch-C-01 and ifxarch-C-02
2012/10/24                Archive Tape Set D: ifxarch-D-01 and ifxarch-D-02
2012/11/16                Archive Tape Set A: ifxarch-A-01 and ifxarch-A-02
2012/12/20                Archive Tape Set B: ifxarch-B-01 and ifxarch-B-02
2013/01/25                Archive Tape Set C: ifxarch-C-01 and ifxarch-C-02
2013/02/22                Archive Tape Set D: ifxarch-D-01 and ifxarch-D-02
2013/03/22                Archive Tape Set A: ifxarch-A-01 and ifxarch-A-02
2013/04/19                Archive Tape Set B: ifxarch-B-01 and ifxarch-B-02
Tape set recycle, starting from 2013-05-03:
Scheduled archive date    Archive Tape Set
2013/05/17                Archive Tape Set C: ifxarch-C-01 and ifxarch-C-02
2013/06/24                Archive Tape Set D: ifxarch-D-01 and ifxarch-D-02
2013/07/16                Archive Tape Set A: ifxarch-A-01 and ifxarch-A-02
2013/08/20                Archive Tape Set B: ifxarch-B-01 and ifxarch-B-02
2013/09/25                Archive Tape Set C: ifxarch-C-01 and ifxarch-C-02
2013/10/12                Archive Tape Set D: ifxarch-D-01 and ifxarch-D-02
2013/11/1                 Archive Tape Set A: ifxarch-A-01 and ifxarch-A-02
2013/12/19                Archive Tape Set B: ifxarch-B-01 and ifxarch-B-02
2014/01/25                Archive Tape Set C: ifxarch-C-01 and ifxarch-C-02
2014/02/22                Archive Tape Set D: ifxarch-D-01 and ifxarch-D-02
2014/03/22                Archive Tape Set A: ifxarch-A-01 and ifxarch-A-02
2014/04/19                Archive Tape Set B: ifxarch-B-01 and ifxarch-B-02
2014/05/25                Archive Tape Set C: ifxarch-C-01 and ifxarch-C-02
2014/06/22                Archive Tape Set D: ifxarch-D-01 and ifxarch-D-02
2014/07/22                Archive Tape Set A: ifxarch-A-01 and ifxarch-A-02
2014/08/19                Archive Tape Set B: ifxarch-B-01 and ifxarch-B-02
2014/09/25                Archive Tape Set C: ifxarch-C-01 and ifxarch-C-02
2014/10/22                Archive Tape Set D: ifxarch-D-01 and ifxarch-D-02
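The schedule above cycles through the four sets in a fixed A→B→C→D order. A hypothetical helper that maps a monthly archive sequence number onto the set names; the real schedule is driven by the calendar table above, so treat this only as a sanity check:

```shell
# Map a monthly archive sequence number (1, 2, 3, ...) onto the
# recycling tape sets A/B/C/D.
tape_set() {
    case $(( ($1 - 1) % 4 )) in
        0) l=A ;;
        1) l=B ;;
        2) l=C ;;
        3) l=D ;;
    esac
    echo "Archive Tape Set $l: ifxarch-$l-01 and ifxarch-$l-02"
}

tape_set 1   # first archive uses set A
tape_set 5   # fifth archive wraps back to set A
```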
$ cd /home/informix
$ ./ardb_bkup.ksh
Server ardb environment ...
Database is up and running ....
Ready to backup database ...
Please wait until it is completed ...
Please mount tape 1 on /dev/rmt0 and press Return to continue ...
10 percent done.
20 percent done.
30 percent done.
40 percent done.
50 percent done.
60 percent done.
70 percent done.
Tape is full ...
Please label this tape as number 1 in the arc tape sequence.
This tape contains the following logical logs:
31635
Please mount tape 2 on /dev/rmt0 and press Return to continue ...
80 percent done.
90 percent done.
100 percent done.
Please label this tape as number 2 in the arc tape sequence.
Program over.
Backup is completed successfully, please check the mail ...
TIPS: Ask the operator to replace the tape with ifxarch-A/B/C/D-02 when the first tape (ifxarch-A/B/C/D-01) is full.
# cd /archbkup
# /archbkup/cgi.archbackup
Bring ardb instance down
$ onstat -
IBM Informix Dynamic Server Version 11.50.UC3W2 -- On-Line -- Up 1 days 04:48:39 -- 929856 Kbytes
Double-check that the database server you are about to shut down is ardb, NOT ipdb:
$ echo $INFORMIXSERVER
ardb
$ onmode -ky
Ask operator to send the backup tape set to Iron Mountain
Call 905-890-3210 Ext: 29, or send email to: ComputerOps@livingstonintl.com
Purge B3 data on production database server: ipdb, database: ip_0p
Picking a weekend to purge B3 data in the production database is strongly recommended.
Scheduled archive date    Scheduled purge date
2012/08/23                2012/08/25
2012/09/20                2012/09/22
2012/10/24                2012/10/26
2012/11/22                2012/11/24
2012/12/20                2012/12/22
2013/01/18                2013/01/19
2013/02/22                2013/02/23
2013/03/22                2013/03/23
2013/04/19                2013/04/20
Starting from 2013-05-03:
Scheduled archive date    Scheduled purge date
2013/05/03                2013/05/17
2013/06/20                2013/06/21
2013/07/22                2013/07/24
2013/08/15                2013/08/23
2013/09/20                2013/09/21
2013/10/12                2013/10/12
2013/11/2                 2013/11/2
2013/12/1                 2013/12/1
2014/01/18                2014/01/20
2014/02/1                 2014/02/1
2014/03/1                 2014/03/1
2014/04/5                 2014/04/5
2014/05/21                2014/05/23
2014/06/15                2014/06/19
2014/07/20                2014/07/21
2014/08/17                2014/08/19
2014/09/20                2014/09/22
2014/10/02                2014/10/20
Stop all data load programs (runner); make sure there is no TCL process running.
Check whether any ./tcl process is running:
$ ps -ef | grep ./tcl
lchen 24510468 17039520 2 09:07:12 pts/0 0:00 grep tcl
bbois 39583900 1 43 09:06:09 - 0:01 ./tcl 70 3
Login as root and comment out all 6 lines of the data load program (runner):
# crontab -e
#########################################################################
# IP Operations Environment
#
#########################################################################
#--> runner
#* 6-7 * * * ksh /insight/local/scripts/runner.10.ksh >> /dmqjtmp/archiveRunnerLog/runner.10.out 2>&1
#* 8-20 * * * ksh /insight/local/scripts/runner.all.ksh >> /dmqjtmp/archiveRunnerLog/runner.all.out 2>&1
#20-40 22 * * * ksh /insight/local/scripts/runner.71.ksh >> /dmqjtmp/archiveRunnerLog/runner.71.out 2>&1
#1,16,31,46 8-20 * * * /insight/local/scripts/iccdataupload/StartInsightUpload.ksh >> /insight/local/scripts/iccdataupload/StartInsightUpload.out 2>&1
#2,17,32,47 8-20 * * * /insight/local/scripts/ICCSetExpiryDates/StartInsightSetExpiryDates.ksh >> /insight/local/scripts/ICCSetExpiryDates/StartInsightSetExpiryDates.out 2>&1
#5 15 * * * /insight/local/scripts/ICCBillingUpload/StartInsightBillingUpload.ksh >> /insight/local/scripts/ICCBillingUpload/StartInsightBillingUpload.out 2>&1
Wait until there is no tcl program running:
$ ps -ef | grep ./tcl
There is no tcl process found.
$ onstat -g ses | grep bbois
There is no bbois session found.
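The checks above can be wrapped in a small polling loop. This is a sketch, not one of the delivered scripts; the process name ./tcl comes from this procedure, and the 60-second interval is an arbitrary choice:

```shell
# Return success (0) while at least one ./tcl process is still running;
# the [.] in the pattern keeps the grep itself out of the match.
tcl_running() {
    ps -ef | grep -q '[.]/tcl'
}

# Poll every 60 seconds until the tcl loaders have drained.
while tcl_running; do
    sleep 60
done
echo "no tcl process found - safe to start the purge"
```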
Start Purge program
cd /usr/apps/inf/bob/delb3
nohup ./deleteb3.ksh > ./deleteb3.out 2>&1 &
[1] 16711752
Purge started
$ onstat -g sql
IBM Informix Dynamic Server Version 11.50.UC3W2 -- On-Line -- Up 222 days 14:11:32 -- 2051200 Kbytes
Sess SQL Current Iso Lock SQL ISAM F.E.
Id Stmt type Database Lvl Mode ERR ERR Vers Explain
794406 DELETE ip_0p CR Not Wait 0 0 9.24 Off
794277 - ip_0p DR Not Wait 0 0 9.22 Off
785756 - ip_0p DR Not Wait 0 0 9.28 Off
785751 - ip_0p DR Not Wait 0 0 9.28 Off
Confirm the purge process completed successfully
About 8 hours later:
onstat -g sql | grep DELETE
No record found
cd /usr/apps/inf/bob/delb3; grep ":s_b3iid value=" deleteb3_1.trc | wc -l
294448
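The grep | wc -l check works because deleteb3_1 writes one TRACE line per deleted row into its .trc file. A small hypothetical helper around the same check (the trace-line format comes from the procedure shown earlier):

```shell
# Count the rows deleteb3_1 visited by counting its TRACE lines.
purged_rows() {
    grep -c ':s_b3iid value=' "$1"
}

# Example against a small fabricated trace file:
trc=$(mktemp)
printf ':s_b3iid value=1001\n:s_b3iid value=1002\n' > "$trc"
purged_rows "$trc"   # prints 2
rm -f "$trc"
```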
Clean the RUNNER log
cd /dmqjtmp/archiveRunnerLog
cp runner.10.out /recyclebox; cat /dev/null > runner.10.out
cp runner.71.out /recyclebox; cat /dev/null > runner.71.out
cp runner.all.out /recyclebox; cat /dev/null > runner.all.out
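All three lines follow the same archive-then-truncate pattern; truncating with cat /dev/null (rather than removing the file) keeps the file handle of any still-running writer valid. A sketch of the pattern as a hypothetical helper, exercised in a scratch directory:

```shell
# Archive a log into a holding directory, then truncate it in place.
rotate_log() {
    cp "$1" "$2"/ && cat /dev/null > "$1"
}

# Example in a scratch directory:
d=$(mktemp -d)
mkdir "$d/recyclebox"
echo "old runner output" > "$d/runner.10.out"
rotate_log "$d/runner.10.out" "$d/recyclebox"
ls -l "$d/runner.10.out"   # now 0 bytes; the copy sits in recyclebox
```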
Uncomment all data load programs (restart runner)
# crontab -e
#########################################################################
# IP Operations Environment
#
#########################################################################
#--> runner
* 6-7 * * * ksh /insight/local/scripts/runner.10.ksh >> /dmqjtmp/archiveRunnerLog/runner.10.out 2>&1
* 8-20 * * * ksh /insight/local/scripts/runner.all.ksh >> /dmqjtmp/archiveRunnerLog/runner.all.out 2>&1
20-40 22 * * * ksh /insight/local/scripts/runner.71.ksh >> /dmqjtmp/archiveRunnerLog/runner.71.out 2>&1
1,16,31,46 8-20 * * * /insight/local/scripts/iccdataupload/StartInsightUpload.ksh >> /insight/local/scripts/iccdataupload/StartInsightUpload.out 2>&1
2,17,32,47 8-20 * * * /insight/local/scripts/ICCSetExpiryDates/StartInsightSetExpiryDates.ksh >> /insight/local/scripts/ICCSetExpiryDates/StartInsightSetExpiryDates.out 2>&1
5 15 * * * /insight/local/scripts/ICCBillingUpload/StartInsightBillingUpload.ksh >> /insight/local/scripts/ICCBillingUpload/StartInsightBillingUpload.out 2>&1
Archive storage consideration:
About 800-1000 MB of storage space is consumed on the archive DB after each monthly B3 archive (1.5 GB).
Insight (ifx01) DRP procedure
Sunguard System Environment:
Configuration ID: p690 Hotsite 2; LPAR11
Hostname : ifx01
OS level: 5300-08-01-0819
CPU : 2
Memory : 8G
Internal Disk: 144G (4 x 36G)
External Disk: 300G (6 x 50G)
Network: 1 x 1000G
DDS5 DAT72 Tape Drive: 1
Use HMC to connect to LPAR11 as a console.
Insight backup tapes:
IFX01 system April 29th + April 21st
IFX01 App May 2nd + May 3rd
IFX01 DB May 2nd + May 3rd
Step 1: Restore the basic OS (rootvg) via OS backup tape
1. Display and/or change the primary boot device.
To display the primary boot device:
# bootlist -m normal -o
To change the primary boot device to tape drive:
# bootlist -m normal rmt0
2. Power off system by:
# sync; sync; sync; shutdown -F
.
3. Turning on the external devices first is necessary so that the system unit can identify them during the
startup (boot) process. These include:
Terminals
Tape drives
Monitors
External disk drives
4. Power on the system. When booting, a screen will appear (before the one in Figure 1-1) asking you to
press a function key (such as F1) to select the proper display as the system console. Each display on
the system will receive a function key number in order to identify it as the system console
Activate LPAR_A6; in the Advanced… options, select Boot mode: SMS.
SMS offers 5 options for NVRAM parameter settings and operations:
1. Select Language: always choose English, or just leave it alone.
2. Setup Remote IPL: it is important to choose an Ethernet interface and set the IP/router needed to reach the NIM server.
3. Change SCSI Settings: only needed if you have many SCSI card connections that may have SCSI ID conflicts.
4. Select Console: always the current one you work on.
5. Select Boot Options.
choose: 5 Select Boot Options
choose: 1 Select Install/Boot Device
choose: 2 Tape
choose: 6 List All Devices
choose: 4 SCSI Tape
choose: 3 Service Mode Boot
choose: 1 Yes
-----------------------------------------------------------------------------------------------------------------------------
TIPS: If you need to diagnose system hardware status right after loading the OS kernel from the local hard disk: choose 1, then choose 5, then run init 2 to bring the system OS to the multi-user run level.
---------------------------------------------------------------------------------------------------------------------------
TIPS: Selecting boot/install from Network/DVD/Tape means the OS kernel is NOT loaded from the hard disk; you may then install a new OS, migrate the OS, or recover the OS onto a selected hard disk.
Choose 3, Service Mode Boot, if you just want to load the OS kernel in Service Mode, which configures the kernel from the current system hardware and enters a diagnostic/maintenance state for system recovery, a totally new installation, and/or migration.
Then choose 3,
then choose 1,
Loading the AIX Kernel….
The following operations are controlled by the OS kernel…
choose 1, to have English during install
1. Start Install Now with Default Settings
2. Change/Show Installation Settings and Install
3. Start Maintenance Mode for System Recovery
4. Configure Network Disks (iscsi)
5. Select Storage Adapters
choose 3, Start Maintenance Mode for System Recovery
1. Access a Root Volume Group
2. Copy a System Dump to Removable Media
3. Access Advanced Maintenance Functions
4. Erase Disks
5. Configure Network Disks (iscsi)
6. Select Storage Adapters
choose 3, Access Advanced Maintenance Functions
choose 1, Access Root Volume Group
choose 15, Volume Group hdisk1
choose 1, Access This Volume Group and Start a Shell; you can then fsck each filesystem on rootvg and even change the root password in /etc/passwd, or use #passwd directly.
If you choose 2 instead, the volume group rootvg is imported and the / filesystem is checked.
---------------------------------------------------------------------------------------------------------------------------
5. Insert the mksysb OS backup tape media into the Tape Drive. The system begins booting from the
installation media. After several minutes, c31 is displayed on the LED (if your system has an LED); a
screen similar to the one in Figure 1-1 is displayed.
Figure 1-1
6. Select option 3, Start Maintenance Mode for System Recovery, and press Enter. A screen similar to the
one in Figure 1-2 is shown.
Figure 1-2
If you want to change the root password here, use "1 Access a Root Volume Group".
7. Enter 5, Install from a system backup.
The next System Backup Installation and Settings screen specifies the disks where you want to install
the backup image. The Change Disk(s) Where You Want to Install screen displays, listing all available
disks on which you can install the system backup image. Three greater-than signs (>>>) mark each
selected disk. Type the number and press Enter for each disk you choose; type the number of a
selected disk again to deselect it. You can select more than one disk.
We can select all 4 internal disks (hdisk0 hdisk1 hdisk2 hdisk3) to create rootvg.
8. After you have finished selecting disks, press the Enter key.
9. Type 0 to accept the settings in the System Backup Installation and Settings screen. The Installing Base
Operating System screen displays the rate of completion and duration.
After the OS is restored from the backup tape, the system should boot in normal mode.
Step 2: Setup System file systems environment for Application restoration
1. Preparation in /etc/filesystems:
# cp /etc/filesystems /etc/filesystems.backup.20120503
Remove stanza entries of following filesystems in /etc/filesystems:
/ix_root
/ix_plog
/ix_llog
/ix_dat1
/ix_dat2
/ix_dat3
/ix_idx1
/ix_idx2
/ix_idx3
/ix_temp
/usr/apps
/netins
/dmqjtmp
/recyclebox
/ach_root
/ach_plog
/ach_llog
/ach_dat1
/ach_dat2
/ach_idx1
/ach_idx2
/ach_temp
# vi /etc/filesystems
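Each stanza in /etc/filesystems is a /mountpoint: header line followed by indented attribute lines, so the removal can also be scripted. A hypothetical awk sketch that drops one named stanza; it assumes the standard stanza layout, so inspect its output before ever replacing the real file:

```shell
# Print /etc/filesystems-style input with one stanza removed.
# A stanza starts at a non-indented line like "/ix_root:" and runs
# until the next header line (or end of file).
drop_stanza() {
    awk -v fs="$1:" '
        /^[^ \t]/ { skip = ($1 == fs) }   # header line: decide keep/skip
        !skip                             # print every non-skipped line
    ' "$2"
}

# Example: drop_stanza /ix_root /etc/filesystems.backup.20120503 > /tmp/filesystems.new
```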
2. Use 6 external disks (300G) to create application and database storage space:
# Create Volume Group dbvg
/usr/sbin/mkvg -s 256 -f -y dbvg hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9
# Create Logic Volumes
/usr/sbin/mklv -t jfs2log -y loglv00 dbvg 1
/usr/sbin/mklv -t jfs2 -y ixrootlv dbvg 1
/usr/sbin/mklv -t jfs2 -y ixploglv dbvg 1
/usr/sbin/mklv -t jfs2 -y ixlloglv dbvg 4
/usr/sbin/mklv -t jfs2 -y ixdat1lv dbvg 88
/usr/sbin/mklv -t jfs2 -y ixdat2lv dbvg 100
/usr/sbin/mklv -t jfs2 -y ixdat3lv dbvg 76
/usr/sbin/mklv -t jfs2 -y ixidx1lv dbvg 28
/usr/sbin/mklv -t jfs2 -y ixidx2lv dbvg 20
/usr/sbin/mklv -t jfs2 -y ixidx3lv dbvg 16
/usr/sbin/mklv -t jfs2 -y ixtemplv dbvg 16
/usr/sbin/mklv -t jfs2 -y appslv dbvg 40
/usr/sbin/mklv -t jfs2 -y netinslv dbvg 10
/usr/sbin/mklv -t jfs2 -y dmqjtmplv dbvg 50
/usr/sbin/mklv -t jfs2 -y recyclelv dbvg 40
/usr/sbin/mklv -t jfs2 -y achrootlv dbvg 1
/usr/sbin/mklv -t jfs2 -y achploglv dbvg 1
/usr/sbin/mklv -t jfs2 -y achlloglv dbvg 4
/usr/sbin/mklv -t jfs2 -y achdat1lv dbvg 152
/usr/sbin/mklv -t jfs2 -y achdat2lv dbvg 164
/usr/sbin/mklv -t jfs2 -y achidx1lv dbvg 12
/usr/sbin/mklv -t jfs2 -y achidx2lv dbvg 12
/usr/sbin/mklv -t jfs2 -y achtemplv dbvg 8
# Create FileSystems
/usr/sbin/crfs -v jfs2 -d ixrootlv -m /ix_root -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d ixploglv -m /ix_plog -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d ixlloglv -m /ix_llog -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d ixdat1lv -m /ix_dat1 -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d ixdat2lv -m /ix_dat2 -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d ixdat3lv -m /ix_dat3 -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d ixidx1lv -m /ix_idx1 -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d ixidx2lv -m /ix_idx2 -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d ixidx3lv -m /ix_idx3 -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d ixtemplv -m /ix_temp -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d appslv -m /usr/apps -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d netinslv -m /netins -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d dmqjtmplv -m /dmqjtmp -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d recyclelv -m /recyclebox -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d achrootlv -m /ach_root -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d achploglv -m /ach_plog -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d achlloglv -m /ach_llog -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d achdat1lv -m /ach_dat1 -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d achdat2lv -m /ach_dat2 -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d achidx1lv -m /ach_idx1 -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d achidx2lv -m /ach_idx2 -A yes -p rw -a logname=loglv00
/usr/sbin/crfs -v jfs2 -d achtemplv -m /ach_temp -A yes -p rw -a logname=loglv00
# Mount all these filesystems
/usr/sbin/mount all
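Because the volume group was created with -s 256, each mklv size above is a count of 256 MB physical partitions (PPs); ixdat1lv at 88 PPs is 22 GB, for example. A quick shell sanity check of the layout (the PP counts are copied from the mklv list above):

```shell
# Convert mklv PP counts (256 MB each, per "mkvg -s 256") to MB,
# and total the layout to confirm it fits the external storage.
pp_mb() { echo $(( $1 * 256 )); }

pp_mb 88    # ixdat1lv  -> 22528 MB (22 GB)
pp_mb 164   # achdat2lv -> 41984 MB (41 GB)

total=0
for pps in 1 1 1 4 88 100 76 28 20 16 16 40 10 50 40 1 1 4 152 164 12 12 8; do
    total=$(( total + pps ))
done
echo "total: $total PPs = $(( total * 256 )) MB"
```

The 845 PPs come to about 211 GB, comfortably inside the 6 x 50 GB = 300 GB of external disk.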
Step 3: Restore Application
To restore backups from a single-volume, multiple-backup tape, for example:
# restore -xvqs 5 -f /dev/rmt0.1
# restore -xvqs 4 -f /dev/rmt0.1
The first command extracts all files from the fifth archive on the multiple-backup tape specified by /dev/rmt0.1.
The .1 designator specifies that the tape device will not be retensioned when it is opened and will not be
rewound when it is closed. A no-rewind-on-close, no-retension-on-open tape device is necessary
because of the behavior of the -s flag. The second command extracts all files from the fourth archive relative
to the current position of the tape head. After the fifth archive has been restored, the tape
read/write head is in a position to read the sixth archive. Since you want to extract the ninth archive on the tape, you
must specify a value of 4 with the -s flag: the -s flag is relative to your position on the tape,
not to an archive's absolute position on the tape. The ninth archive is the fourth archive from your current position on the
tape.
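Since -s counts forward from the tape head's current position, the value to pass is always the absolute archive number minus the number of archives already read past on this mount. A hypothetical one-line helper for planning a restore sequence:

```shell
# -s value for the next restore: the absolute archive number wanted,
# minus the number of archives already read past on this tape mount.
next_s() { echo $(( $1 - $2 )); }

next_s 5 0   # fresh (rewound) tape, fifth archive  -> -s 5
next_s 9 5   # after reading through archive 5      -> -s 4
```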
The Application file systems backup sequence:
filesystem: / File Archive number: 1
filesystem: /home File Archive number: 2
filesystem: /usr File Archive number: 3
filesystem: /var File Archive number: 4
filesystem: /tmp File Archive number: 5
filesystem: /opt File Archive number: 6
filesystem: /ibm File Archive number: 7
filesystem: /netins File Archive number: 8
filesystem: /dmqjtmp File Archive number: 9
filesystem: /recyclebox File Archive number: 10
filesystem: /usr/apps File Archive number: 11
filesystem: /insight File Archive number: 12
filesystem: /var/adm/ras/livedump File Archive number: 13
filesystem: /admin File Archive number: 14
We need to restore these file systems: /ibm, /netins, /dmqjtmp, /recyclebox, and /usr/apps. The other file systems are
restored by the OS restore process (they are in rootvg).
1. Insert APP backup tape media into the Tape Drive.
# tctl -f /dev/rmt0 rewind
2. To restore the /ibm file system, change to the directory that will be used to restore the files:
# cd /ibm
# restore -xvqs 7 -f /dev/rmt0.1
You have not read any media yet.
Unless you know which volume your file or files are on, you should start with the last volume and
work towards the first volume.
Specify the next volume number: 1
[ Type the volume number and press Return. If you have only one volume, type 1 and press
Return ]
Do you want to set the owner or the mode for the current directory? [ yes or no ] no
[ To keep the mode of the current directory unchanged, enter no at the set owner/mode
prompt ]
3. Then, to restore /netins file system:
# cd /netins
# restore -xvqs 1 -f /dev/rmt0.1
You have not read any media yet.
Unless you know which volume your file or files are on, you should start with the last volume and
work towards the first volume.
Specify the next volume number: 1
Do you want to set the owner or the mode for the current directory? [ yes or no ] no
4. Then, to restore /dmqjtmp file system:
# cd /dmqjtmp
# restore -xvqs 1 -f /dev/rmt0.1
You have not read any media yet.
Unless you know which volume your file or files are on, you should start with the last volume and
work towards the first volume.
Specify the next volume number: 1
Do you want to set the owner or the mode for the current directory? [ yes or no ] yes
5. Then, to restore /recyclebox file system:
# cd /recyclebox
# restore -xvqs 1 -f /dev/rmt0.1
You have not read any media yet.
Unless you know which volume your file or files are on, you should start with the last volume and
work towards the first volume.
Specify the next volume number: 1
Do you want to set the owner or the mode for the current directory? [ yes or no ] yes
6. Then, to restore /usr/apps file system:
# cd /usr/apps
# restore -xvqs 1 -f /dev/rmt0.1
You have not read any media yet.
Unless you know which volume your file or files are on, you should start with the last volume and
work towards the first volume.
Specify the next volume number: 1
Do you want to set the owner or the mode for the current directory? [ yes or no ] yes
7. Eject the tape from tape drive:
# tctl -f /dev/rmt0.1 offline
Step 4: Setup Informix database Restore Environment
# Create Database Storage files (chunks) for informix dbspace
cd /ix_dat1
/usr/bin/touch ix_dat1.1
/usr/bin/touch ix_dat1.10
/usr/bin/touch ix_dat1.11
/usr/bin/touch ix_dat1.12
/usr/bin/touch ix_dat1.13
/usr/bin/touch ix_dat1.14
/usr/bin/touch ix_dat1.15
/usr/bin/touch ix_dat1.16
/usr/bin/touch ix_dat1.17
/usr/bin/touch ix_dat1.18
/usr/bin/touch ix_dat1.19
/usr/bin/touch ix_dat1.2
/usr/bin/touch ix_dat1.20
/usr/bin/touch ix_dat1.21
/usr/bin/touch ix_dat1.22
/usr/bin/touch ix_dat1.3
/usr/bin/touch ix_dat1.4
/usr/bin/touch ix_dat1.5
/usr/bin/touch ix_dat1.6
/usr/bin/touch ix_dat1.7
/usr/bin/touch ix_dat1.8
/usr/bin/touch ix_dat1.9
cd /ix_dat2
/usr/bin/touch ix_dat2.1
/usr/bin/touch ix_dat2.10
/usr/bin/touch ix_dat2.11
/usr/bin/touch ix_dat2.12
/usr/bin/touch ix_dat2.13
/usr/bin/touch ix_dat2.14
/usr/bin/touch ix_dat2.15
/usr/bin/touch ix_dat2.16
/usr/bin/touch ix_dat2.17
/usr/bin/touch ix_dat2.18
/usr/bin/touch ix_dat2.19
/usr/bin/touch ix_dat2.2
/usr/bin/touch ix_dat2.20
/usr/bin/touch ix_dat2.21
/usr/bin/touch ix_dat2.22
/usr/bin/touch ix_dat2.23
/usr/bin/touch ix_dat2.24
/usr/bin/touch ix_dat2.25
/usr/bin/touch ix_dat2.3
/usr/bin/touch ix_dat2.4
/usr/bin/touch ix_dat2.5
/usr/bin/touch ix_dat2.6
/usr/bin/touch ix_dat2.7
/usr/bin/touch ix_dat2.8
/usr/bin/touch ix_dat2.9
cd /ix_dat3
/usr/bin/touch ix_dat3.1
/usr/bin/touch ix_dat3.10
/usr/bin/touch ix_dat3.11
/usr/bin/touch ix_dat3.12
/usr/bin/touch ix_dat3.13
/usr/bin/touch ix_dat3.14
/usr/bin/touch ix_dat3.15
/usr/bin/touch ix_dat3.16
/usr/bin/touch ix_dat3.17
/usr/bin/touch ix_dat3.18
/usr/bin/touch ix_dat3.19
/usr/bin/touch ix_dat3.2
/usr/bin/touch ix_dat3.3
/usr/bin/touch ix_dat3.4
/usr/bin/touch ix_dat3.5
/usr/bin/touch ix_dat3.6
/usr/bin/touch ix_dat3.7
/usr/bin/touch ix_dat3.8
/usr/bin/touch ix_dat3.9
cd /ix_idx1
/usr/bin/touch ix_idx1.1
/usr/bin/touch ix_idx1.2
/usr/bin/touch ix_idx1.3
/usr/bin/touch ix_idx1.4
/usr/bin/touch ix_idx1.5
/usr/bin/touch ix_idx1.6
/usr/bin/touch ix_idx1.7
cd /ix_idx2
/usr/bin/touch ix_idx2.1
/usr/bin/touch ix_idx2.2
/usr/bin/touch ix_idx2.3
/usr/bin/touch ix_idx2.4
/usr/bin/touch ix_idx2.5
cd /ix_idx3
/usr/bin/touch ix_idx3.1
/usr/bin/touch ix_idx3.2
/usr/bin/touch ix_idx3.3
/usr/bin/touch ix_idx3.4
cd /ix_llog
/usr/bin/touch ix_llog.1
cd /ix_plog
/usr/bin/touch ix_plog.1
cd /ix_root
/usr/bin/touch ix_root.1
cd /ix_temp
/usr/bin/touch ix_temp.1
/usr/bin/touch ix_temp.2
/usr/bin/touch ix_temp.3
/usr/bin/touch ix_temp.4
cd /
/usr/bin/chown -R informix:informix ix*
/usr/bin/chmod -R 660 ix*
/usr/bin/chmod 777 ix*
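The long touch lists above can equally be generated with a loop. A sketch that builds one chunk set in a scratch directory (the count of 22 matches ix_dat1 above; point it at the real /ix_* mounts only after double-checking each count, and remember the real files must be owned by informix:informix):

```shell
# Create empty chunk files PREFIX.1 .. PREFIX.COUNT with the 660 mode
# that Informix expects on its dbspace chunks.
make_chunks() {
    prefix=$1; count=$2
    i=1
    while [ "$i" -le "$count" ]; do
        touch "$prefix.$i"
        chmod 660 "$prefix.$i"   # plus chown informix:informix in real use
        i=$(( i + 1 ))
    done
}

# Example in a scratch directory:
d=$(mktemp -d)
make_chunks "$d/ix_dat1" 22
ls "$d" | wc -l   # 22 files
```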
Reboot the system
# sync; sync; sync; shutdown -Fr
Step 5: Restore Informix database
1. Setup Informix running environment:
Login as root
# hostname ifx01
# ifconfig en0 192.168.108.60
Login as user informix
$ . ./ids115.env ipdb
2. Restore Informix Database:
Insert Informix ‘ontape -s’ backup tape media into the Tape Drive
$ ontape -r
Please mount tape 1 on /dev/rmt0 and press Return to continue:…
[enter]
Continue to restore? (y/n) y
Do you want to back up the logs? (y/n) n
Warning : If you intent to use J/Foundation or GLS for Unicode feature(GLU) with this server
instance, please make sure that your SHMBASE value specifies in onconfig is 0x40000000L or
above. Otherwise you will have problems while attaching or dynamically adding virtul shared
memory segments. Please refer to Server machine notes for more information.
Restore a level 1 archive (y/n) n
Do you want to restore log tapes? (y/n) n
/usr/apps/inf/ver115UC3/bin/onmode -sy
Program over
3. Bring the database server online when the restore is over
$ onmode -m
Step 6: Bring Database and application online
Run Informix:
Login as user informix
$ . ./ids115.env ipdb
$ oninit
Shutdown Informix:
$ onmode -ky
Run Tuxedo Application:
Login as ipgown
$ cd /usr/apps/ipg/ver001/srv/locus
$ . ./setenv.locus
$ tmboot -y
Shutdown Tuxedo Application:
$ tmshutdown -y
DRP considerations for the Archive database server (ardb)
Archive database storage architecture:
IBM Informix Dynamic Server Version 11.50.UC3W2 -- On-Line -- Up 02:24:07 -- 929856 Kbytes
Dbspaces
address number flags fchunk nchunks pgsize flags owner name
50431810 1 0x1 1 1 4096 N informix rootdbs
5051dd50 2 0x1 2 1 4096 N informix llogdbs
5051deb0 3 0x1 3 2 4096 N informix tempdbs1
5138a018 4 0x1 4 1 4096 N informix plogdbs
5138a178 5 0x1 5 44 4096 N informix datadbs1
5138a2d8 6 0x1 27 48 4096 N informix datadbs2
5138a438 7 0x1 51 3 4096 N informix indxdbs1
5138a598 8 0x1 54 3 4096 N informix indxdbs2
8 active, 2047 maximum
Chunks
address chunk/dbs offset size free bpages flags pathname
50431970 1 1 0 62500 54561 PO-- /ach_root/ach_root.1
5138a6f8 2 2 0 250000 124947 PO-- /ach_llog/ach_llog.1
5138a8c8 3 3 0 250000 249547 PO-- /ach_temp/ach_temp.1
5138aa98 4 4 0 62500 2447 PO-- /ach_plog/ach_plog.1
5138ac68 5 5 0 250000 0 PO-- /ach_dat1/ach_dat1.1
5138ae38 6 5 0 250000 3 PO-- /ach_dat1/ach_dat1.2
5138b018 7 5 0 250000 1 PO-- /ach_dat1/ach_dat1.3
5138b1e8 8 5 0 250000 1 PO-- /ach_dat1/ach_dat1.4
5138b3b8 9 5 0 250000 1 PO-- /ach_dat1/ach_dat1.5
5138b588 10 5 0 250000 0 PO-- /ach_dat1/ach_dat1.6
5138b758 11 5 0 250000 0 PO-- /ach_dat1/ach_dat1.7
5138b928 12 5 0 250000 0 PO-- /ach_dat1/ach_dat1.8
5138baf8 13 5 0 250000 0 PO-- /ach_dat1/ach_dat1.9
5138bcc8 14 5 0 250000 0 PO-- /ach_dat1/ach_dat1.10
5138c018 15 5 0 250000 0 PO-- /ach_dat1/ach_dat1.11
5138c1e8 16 5 0 250000 0 PO-- /ach_dat1/ach_dat1.12
5138c3b8 17 5 0 250000 1 PO-- /ach_dat1/ach_dat1.13
5138c588 18 5 0 250000 0 PO-- /ach_dat1/ach_dat1.14
5138c758 19 5 0 250000 3 PO-- /ach_dat1/ach_dat1.15
5138c928 20 5 0 250000 0 PO-- /ach_dat1/ach_dat1.16
5138caf8 21 5 0 250000 0 PO-- /ach_dat1/ach_dat1.17
5138ccc8 22 5 0 250000 0 PO-- /ach_dat1/ach_dat1.18
5138d018 23 5 0 250000 0 PO-- /ach_dat1/ach_dat1.19
5138d1e8 24 5 0 250000 1 PO-- /ach_dat1/ach_dat1.20
5138d3b8 25 5 0 250000 1 PO-- /ach_dat1/ach_dat1.21
5138d588 26 5 0 250000 0 PO-- /ach_dat1/ach_dat1.22
5138d758 27 6 0 250000 1 PO-- /ach_dat2/ach_dat2.1
5138d928 28 6 0 250000 0 PO-- /ach_dat2/ach_dat2.2
5138daf8 29 6 0 250000 0 PO-- /ach_dat2/ach_dat2.3
5138dcc8 30 6 0 250000 0 PO-- /ach_dat2/ach_dat2.4
5138e018 31 6 0 250000 0 PO-- /ach_dat2/ach_dat2.5
5138e1e8 32 6 0 250000 0 PO-- /ach_dat2/ach_dat2.6
5138e3b8 33 6 0 250000 0 PO-- /ach_dat2/ach_dat2.7
5138e588 34 6 0 250000 0 PO-- /ach_dat2/ach_dat2.8
5138e758 35 6 0 250000 0 PO-- /ach_dat2/ach_dat2.9
5138e928 36 6 0 250000 0 PO-- /ach_dat2/ach_dat2.10
5138eaf8 37 6 0 250000 3 PO-- /ach_dat2/ach_dat2.11
5138ecc8 38 6 0 250000 0 PO-- /ach_dat2/ach_dat2.12
5138f018 39 6 0 250000 0 PO-- /ach_dat2/ach_dat2.13
5138f1e8 40 6 0 250000 0 PO-- /ach_dat2/ach_dat2.14
5138f3b8 41 6 0 250000 0 PO-- /ach_dat2/ach_dat2.15
5138f588 42 6 0 250000 0 PO-- /ach_dat2/ach_dat2.16
5138f758 43 6 0 250000 0 PO-- /ach_dat2/ach_dat2.17
5138f928 44 6 0 250000 0 PO-- /ach_dat2/ach_dat2.18
5138faf8 45 6 0 250000 0 PO-- /ach_dat2/ach_dat2.19
5138fcc8 46 6 0 250000 0 PO-- /ach_dat2/ach_dat2.20
51390018 47 6 0 250000 0 PO-- /ach_dat2/ach_dat2.21
513901e8 48 6 0 250000 0 PO-- /ach_dat2/ach_dat2.22
513903b8 49 6 0 250000 0 PO-- /ach_dat2/ach_dat2.23
51390588 50 6 0 250000 0 PO-- /ach_dat2/ach_dat2.24
51390758 51 7 0 250000 2 PO-- /ach_idx1/ach_idx1.1
51390928 52 7 0 250000 162 PO-- /ach_idx1/ach_idx1.2
51390af8 53 7 0 250000 245901 PO-- /ach_idx1/ach_idx1.3
51390cc8 54 8 0 250000 176857 PO-- /ach_idx2/ach_idx2.1
51391018 55 8 0 250000 249997 PO-- /ach_idx2/ach_idx2.2
513911e8 56 8 0 250000 249997 PO-- /ach_idx2/ach_idx2.3
513913b8 57 3 0 256000 255997 PO-- /ach_temp/ach_temp.2
51391588 58 5 0 250000 0 PO-- /ach_dat1/ach_dat1.23
51391758 59 5 0 250000 0 PO-- /ach_dat1/ach_dat1.24
51391928 60 5 0 250000 0 PO-- /ach_dat1/ach_dat1.25
51391af8 61 5 0 250000 0 PO-- /ach_dat1/ach_dat1.26
51391cc8 62 5 0 250000 0 PO-- /ach_dat1/ach_dat1.27
51395018 63 5 0 250000 0 PO-- /ach_dat1/ach_dat1.28
513951e8 64 6 0 250000 0 PO-- /ach_dat2/ach_dat2.25
513953b8 65 6 0 250000 1 PO-- /ach_dat2/ach_dat2.26
51395588 66 6 0 250000 0 PO-- /ach_dat2/ach_dat2.27
51395758 67 6 0 250000 0 PO-- /ach_dat2/ach_dat2.28
51395928 68 6 0 250000 0 PO-- /ach_dat2/ach_dat2.29
51395af8 69 6 0 250000 0 PO-- /ach_dat2/ach_dat2.30
51395cc8 70 5 0 250000 1 PO-- /ach_dat1/ach_dat1.29
51396018 71 5 0 250000 1 PO-- /ach_dat1/ach_dat1.30
513961e8 72 5 0 250000 1 PO-- /ach_dat1/ach_dat1.31
513963b8 73 5 0 250000 3 PO-- /ach_dat1/ach_dat1.32
51396588 74 5 0 250000 0 PO-- /ach_dat1/ach_dat1.33
51396758 75 6 0 250000 0 PO-- /ach_dat2/ach_dat2.31
51396928 76 6 0 250000 0 PO-- /ach_dat2/ach_dat2.32
51396af8 77 6 0 250000 0 PO-- /ach_dat2/ach_dat2.33
51396cc8 78 5 0 250000 2 PO-- /ach_dat1/ach_dat1.34
51397018 79 5 0 250000 1 PO-- /ach_dat1/ach_dat1.35
513971e8 80 6 0 250000 0 PO-- /ach_dat2/ach_dat2.34
513973b8 81 6 0 250000 0 PO-- /ach_dat2/ach_dat2.35
51397588 82 5 0 250000 5 PO-- /ach_dat1/ach_dat1.36
51397758 83 5 0 250000 5 PO-- /ach_dat1/ach_dat1.37
51397928 84 5 0 250000 125 PO-- /ach_dat1/ach_dat1.38
51397af8 85 6 0 250000 0 PO-- /ach_dat2/ach_dat2.36
51397cc8 86 6 0 250000 0 PO-- /ach_dat2/ach_dat2.37
51398018 87 6 0 250000 0 PO-- /ach_dat2/ach_dat2.38
513981e8 88 6 0 250000 0 PO-- /ach_dat2/ach_dat2.39
513983b8 89 6 0 250000 0 PO-- /ach_dat2/ach_dat2.40
51398588 90 5 0 250000 397 PO-- /ach_dat1/ach_dat1.39
51398758 91 6 0 250000 0 PO-- /ach_dat2/ach_dat2.41
51398928 92 6 0 250000 0 PO-- /ach_dat2/ach_dat2.42
51398af8 93 5 0 250000 141 PO-- /ach_dat1/ach_dat1.40
51398cc8 94 6 0 250000 0 PO-- /ach_dat2/ach_dat2.43
51399018 95 5 0 250000 141 PO-- /ach_dat1/ach_dat1.41
513991e8 96 6 0 250000 0 PO-- /ach_dat2/ach_dat2.44
513993b8 97 5 0 250000 0 PO-- /ach_dat1/ach_dat1.42
51399588 98 6 0 250000 0 PO-- /ach_dat2/ach_dat2.45
51399758 99 6 0 250000 0 PO-- /ach_dat2/ach_dat2.46
51399928 100 6 0 250000 34945 PO-- /ach_dat2/ach_dat2.47
51399af8 101 6 0 250000 184461 PO-- /ach_dat2/ach_dat2.48
51399cc8 102 5 0 250000 45709 PO-- /ach_dat1/ach_dat1.43
5139a018 103 5 0 250000 249997 PO-- /ach_dat1/ach_dat1.44
103 active, 2047 maximum
NOTE: The values in the "size" and "free" columns for DBspace chunks are
displayed in terms of "pgsize" of the DBspace to which they belong.
Expanded chunk capacity mode: disabled
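The per-dbspace free space in the listing above can be totalled with a short filter. A minimal sketch, assuming the live onstat -d "Chunks" layout shown above (fields: address, chunk, dbs, offset, size, free, flags, pathname) and 4 KB pages:

```shell
# Sum free pages per dbspace from onstat -d output read on stdin.
free_by_dbspace() {
  awk '
    /^Chunks/ {in_chunks=1; next}
    in_chunks && $7 ~ /^PO/ {free[$3] += $6}   # $3 = dbspace number, $6 = free pages
    END {for (d in free) printf "dbspace %s: %d free pages (%d KB)\n", d, free[d], free[d]*4}
  '
}
# Typical use on the live server:
#   onstat -d | free_by_dbspace
```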
Monthly data space change:
69,71c69,71
< 51390928 52 7 0 250000 1698 PO-- /ach_idx1/ach_idx1.2
< 51390af8 53 7 0 250000 249997 PO-- /ach_idx1/ach_idx1.3
< 51390cc8 54 8 0 250000 177497 PO-- /ach_idx2/ach_idx2.1
---
> 51390928 52 7 0 250000 162 PO-- /ach_idx1/ach_idx1.2
> 51390af8 53 7 0 250000 245901 PO-- /ach_idx1/ach_idx1.3
> 51390cc8 54 8 0 250000 176857 PO-- /ach_idx2/ach_idx2.1
117,119c117,119
< 51399928 100 6 0 250000 182409 PO-- /ach_dat2/ach_dat2.47
< 51399af8 101 6 0 250000 249997 PO-- /ach_dat2/ach_dat2.48
< 51399cc8 102 5 0 250000 211597 PO-- /ach_dat1/ach_dat1.43
---
> 51399928 100 6 0 250000 34945 PO-- /ach_dat2/ach_dat2.47
> 51399af8 101 6 0 250000 184461 PO-- /ach_dat2/ach_dat2.48
> 51399cc8 102 5 0 250000 45709 PO-- /ach_dat1/ach_dat1.43
To create new chunks for dbspaces datadbs1 and datadbs2:
Dbspace 6: 432406 - 219406 = 213000 (pages) * 4 = 852,000 (kilobytes)
Dbspace 5: 211597 - 45709 = 165888 (pages) * 4 = 663,552 (kilobytes)
Total: 852,000 + 663,552 = 1,515,552 (kilobytes)
For ip_arch03 to hold 18 months of data, we still need 6 more months (from Nov 2012) of data space:
1515552 * 6 = 9,093,312 (kilobytes)
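The arithmetic above can be reproduced directly in the shell; the page counts are the before/after "free" values from the monthly diff, and pages are 4 KB:

```shell
# Monthly growth in KB computed from free-page deltas (4 KB pages).
prev6=432406 curr6=219406   # datadbs2 chunks 47+48, before/after
prev5=211597 curr5=45709    # datadbs1 chunk 43, before/after
g6=$(( (prev6 - curr6) * 4 ))
g5=$(( (prev5 - curr5) * 4 ))
total=$(( g6 + g5 ))
echo "datadbs2 growth: ${g6} KB"             # 852000
echo "datadbs1 growth: ${g5} KB"             # 663552
echo "monthly total:   ${total} KB"          # 1515552
echo "6 more months:   $(( total * 6 )) KB"  # 9093312
```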
Download data from the archive database server (ardb) for old shipments
Get the customer client number and the month for which the customer needs the data; for example, client
number 137079 and month 2011/04.
1. login as user : informix
$. ./ids115.env ardb
$ oninit
2. Change to an operator user, such as lchen
Prepare the scripts 'entry.sql' and 'entry.head' in directory /home/lchen/informix/download
$ cd /home/lchen/informix/download
$ mkdir 137079
$ dbaccess entry.sql
$ cat entry.head ./137079/201104.tmp > download137079
3. Send download137079 to customer
File sample: entry.sql
CONNECT TO 'ip_arch03@ardb' USER 'lchen' USING 'admini@12';
unload to /home/lchen/informix/download/137079/201104.tmp delimiter '^'
select
x0.acctsecurno,
x0.transno,
x0.b3type,
x0.purchaseorder1,
x0.purchaseorder2,
x0.vendorname,
x0.vendorstate,
x0.vendorzip,
x1.description,
x0.cargcntrlno,
round(x0.freight , 2 ),
x3.description,
round(x0.weight , 2 ),
round(x0.cargcntrlqty , 2 ),
x0.locationofgoods,
x4.description,
x0.shipvia,
x0.containerno,
x0.billoflading,
x2.description,
round(x0.totb3vfd , 2 ),
round(x0.totb3duty , 2 ),
round(x0.totb3sima, 2 ),
round(x0.totb3exctax , 2 ),
round(x0.totb3gst , 2 ),
x0.approveddate [1,10],
x0.reldate [1,10],
"ipgown".p_invbrchno(x0.liibrchno ) || "ipgown".p_invrefno(x0.liirefno),
-- subheader
x13.b3subno,
x13.vendorname,
x13.vendorstate,
x13.vendorzip,
x11.description,
x12.description,
x10.description,
x13.timelim,
x13.timelimunit,
x13.currcode,
x13.shipdate,
-- line
x20.b3lineno,
x20.partdesc,
x20.hsno,
x20.tariffcode,
x20.vfdcode,
x20.exctaxexmptcode,
x20.simacode,
x20.gstexemptcode,
x20.oicspecialaut,
round(x20.convtoqty1 , 2 ),
x20.advaldutyrateumeas,
round(x20.convtoqty2, 2 ),
x20.spcdutyrateumeas,
round(x20.convtoqty3 , 2 ),
x20.excdutyrateumeas,
round(x20.exchgrate , 6 ),
round(x20.advalrate1 , 2 ),
round(x20.spcrate, 2 ),
round(x20.excdutyrate , 6 ),
round(x20.exctaxrate , 2 ),
round(x20.gstrate, 2 ),
round(x20.vfcc , 2 ),
round(x20.vfd , 2 ),
round(x20.advalduty, 2 ),
round(x20.spcduty , 2 ),
round(x20.excduty , 2 ),
round(x20.exctax, 2 ),
round(x20.simaval , 2 ),
round(x20.vft , 2 ),
round(x20.gst , 2 ),
x20.linecomment,
-- recap
x30.ccipageno,
x30.ccilineno,
x30.uom,
round(x30.quantity, 2 ),
round(x30.amount , 2 ),
x30.proddesc
from "informix".b3 x0,
"informix".canct_off x1 ,
"informix".canct_off x2 ,
"informix".usport_exit x3,
"informix".transp_mode x4,
"informix".vw_tarifftrtmnt x10,
"informix".ctry_code x11 ,
"informix".ctry_code x12 ,
"informix".b3_subheader x13,
"informix".b3_line x20,
"informix".b3_recap_details x30
where ( x0.liiclientno in ( 137079 ) ) and
( x0.approveddate like '2011/04/%' ) and
( x0.b3iid = x13.b3iid ) and
( x13.b3subiid = x20.b3subiid ) and
( x20.b3lineiid = x30.b3lineiid ) and
((((x0.usportexit = x3.portexit ) AND
(x0.modetransp = x4.transpmode )) AND
(x0.custoff = x1.canctoffcode ) ) AND
(x0.portunlading = x2.canctoffcode )) AND
(((x13.tarifftrtmnt = x10.tarifftrtmnt ) AND
(x13.placeexp = x11.ctrycode ) ) AND
(x13.ctryorigin = x12.ctrycode )) ;
File sample: entry.head
AcctSecNo^TransactionNo^B3Type^PO1^PO2^VendorName^State^Zip^CustomsOffice^CargoCntrlNo^Freight^USPortExit^Weight^CargoCntrlQty^LocOfGoods^ModeOfTrans^ShipVia^ContainerNo^BillOfLading^PortOfUnlading^TotalVFD^TotalDuty^TotalSIMA^TotalExciseTax^TotalGST^ApprovedDate^ReleaseDate^InvoiceNo^SubHdrNo^SubVenName^SubVenState^SubVenZip^PlaceOfExport^CntryOfOrigin^TariffTrtmnt^TimeLimit^TimeLimitUnit^CurrencyCode^ShipDate^LineNo^Description^HSNo^TariffCode^VFDCode^TaxRefNo^SIMACode^GSTExemptCode^OIC^ReportingQty^ReportingUnitMeas^SpecificQty^SpecificUnitMeas^ExciseQty^ExciseUnitMeas^ExchangeRate^AdValoremRate^SpecificRate^ExciseDutyRate^ExciseTaxRate^GSTRate^VFCC^VFD^AdValoremDuty^SpecificDuty^ExciseDuty^ExciseTax^SIMAValue^VFT^GST^Comments^CCI_PgNo^CCI_LineNo^Recap_UOM^Recap_Qty^Recap_Amt^Recap_ProdDesc^
Using a script to download B3 old data:
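A hedged sketch of such a script, wrapping the manual steps above into one function. The function name and the base-directory parameter are assumptions; entry.sql is still expected to have unloaded the data into <client>/<month>.tmp beforehand:

```shell
# Glue the column-header file onto the unloaded month file for one client.
# Usage: build_download 137079 201104 [/home/lchen/informix/download]
build_download() {
  client=$1; month=$2; base=${3:-/home/lchen/informix/download}
  # entry.sql must already have unloaded the data for this client/month
  cat "${base}/entry.head" "${base}/${client}/${month}.tmp" \
    > "${base}/download${client}"
}
```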
Insight DB refresh (from ifx01 to ipdev) procedure
1. On IFX01, copy Informix DB backup image (ontape -s -L 0) from ifx01 to ipdev
/archbkup/dbbkup/scp_bkup_image.sh
remote_host=ipdev
remote_dir=/dbbackup
image_dir=/livebkup
if [[ $(hostname) == "$remote_host" ]]; then return; fi
cd ${image_dir}
ls ipdb_bkup_L0.* |
while read image
do
scp $image $remote_host:$remote_dir
[[ $? -eq 0 ]] && \
mail -s "$image scp completed" lchen@livingstonintl.com < /dev/null
done
exit 0
2. On IPDEV, restore the Informix DB
[lchen@ipdev /dbbackup] $ sudo su - informix
//lchen@ipdev:/login/infown > . ./ids115.env systestdb
Server systestdb environment ...
//lchen@ipdev:/login/infown > onstat -
IBM Informix Dynamic Server Version 11.50.UC3W2 -- On-Line -- Up 00:30:29 -- 2051200 Kbytes
//lchen@ipdev:/login/infown > echo $INFORMIXDIR
/usr/apps/inf/ver115UC3
//lchen@ipdev:/login/infown > echo $INFORMIXSERVER
systestdb
//lchen@ipdev:/login/infown > onmode -ky
//lchen@ipdev:/login/infown > ontape -r -t /dbbackup/ipdb_bkup_L0.201603040001
Please mount tape 1 on /dbbackup/ipdb_bkup_L0.201603040001 and press Return to continue ...
Archive Tape Information
Tape type: Archive Backup Tape
Online version: IBM Informix Dynamic Server Version 11.50.UC3W2
Archive date: Fri Mar 4 00:01:00 2016
User id: informix
Terminal id: ?
Archive level: 0
Tape device: /livebkup/ipdb_bkup_L0.201603040001
Tape blocksize (in k): 1024
Tape size (in k): 132000000
Tape number in series: 1
Spaces to restore:1 [rootdbs
]
2 [llogdbs
]
3 [plogdbs
]
4 [datadbs1
]
5 [datadbs2
]
6 [indxdbs1
]
7 [indxdbs2
]
8 [datadbs3
]
9 [indxdbs3
]
Archive Information
Informix Dynamic Server Copyright(C) 1986-1998 Informix Software, Inc.
Initialization Time 09/17/2002 09:42:59
System Page Size 4096
Version 16
Index Page Logging OFF
Archive CheckPoint Time 03/04/2016 00:01:00
Dbspaces
number flags fchunk nchunks flags owner name
1 1 1 1 N informix rootdbs
2 1 2 1 N informix llogdbs
3 1 3 1 N informix plogdbs
4 1 4 29 N informix datadbs1
5 1 19 25 N informix datadbs2
6 1 34 7 N informix indxdbs1
7 1 37 6 N informix indxdbs2
8 2001 40 1 N T informix tempdbs1
9 2001 41 1 N T informix tempdbs2
10 2001 42 1 N T informix tempdbs3
11 1 59 20 N informix datadbs3
12 1 75 4 N informix indxdbs3
13 2001 78 1 N T informix tempdbs4
Chunks
chk/dbs offset size free bpages flags pathname
1 1 0 55000 33333 PO-- /ix_root/ix_root.1
2 2 0 250000 69947 PO-- /ix_llog/ix_llog.1
3 3 0 64000 1447 PO-- /ix_plog/ix_plog.1
4 4 0 250000 151 PO-- /ix_dat1/ix_dat1.1
5 4 0 250000 1172 PO-- /ix_dat1/ix_dat1.2
6 4 0 250000 16 PO-- /ix_dat1/ix_dat1.3
7 4 0 250000 1672 PO-- /ix_dat1/ix_dat1.4
8 4 0 250000 0 PO-- /ix_dat1/ix_dat1.5
9 4 0 250000 0 PO-- /ix_dat1/ix_dat1.6
10 4 0 250000 0 PO-- /ix_dat1/ix_dat1.7
11 4 0 250000 0 PO-- /ix_dat1/ix_dat1.8
12 4 0 250000 323 PO-- /ix_dat1/ix_dat1.9
13 4 0 250000 1800 PO-- /ix_dat1/ix_dat1.10
14 4 0 250000 72 PO-- /ix_dat1/ix_dat1.11
15 4 0 250000 2736 PO-- /ix_dat1/ix_dat1.12
16 4 0 250000 1720 PO-- /ix_dat1/ix_dat1.13
17 4 0 250000 0 PO-- /ix_dat1/ix_dat1.14
18 4 0 250000 0 PO-- /ix_dat1/ix_dat1.15
19 5 0 250000 1 PO-- /ix_dat2/ix_dat2.1
20 5 0 250000 541 PO-- /ix_dat2/ix_dat2.2
21 5 0 250000 5 PO-- /ix_dat2/ix_dat2.3
22 5 0 250000 5 PO-- /ix_dat2/ix_dat2.4
23 5 0 250000 5 PO-- /ix_dat2/ix_dat2.5
24 5 0 250000 133 PO-- /ix_dat2/ix_dat2.6
25 5 0 250000 133 PO-- /ix_dat2/ix_dat2.7
26 5 0 250000 133 PO-- /ix_dat2/ix_dat2.8
27 5 0 250000 645 PO-- /ix_dat2/ix_dat2.9
28 5 0 250000 5 PO-- /ix_dat2/ix_dat2.10
29 5 0 250000 17869 PO-- /ix_dat2/ix_dat2.11
30 5 0 250000 26733 PO-- /ix_dat2/ix_dat2.12
31 5 0 250000 5133 PO-- /ix_dat2/ix_dat2.13
32 5 0 250000 18573 PO-- /ix_dat2/ix_dat2.14
33 5 0 250000 16525 PO-- /ix_dat2/ix_dat2.15
34 6 0 250000 1 PO-- /ix_idx1/ix_idx1.1
35 6 0 250000 6 PO-- /ix_idx1/ix_idx1.2
36 6 0 250000 3 PO-- /ix_idx1/ix_idx1.3
37 7 0 250000 3 PO-- /ix_idx2/ix_idx2.1
38 7 0 250000 1 PO-- /ix_idx2/ix_idx2.2
39 7 0 250000 5 PO-- /ix_idx2/ix_idx2.3
40 8 0 250000 246197 PO-- /ix_temp/ix_temp.1
41 9 0 250000 246197 PO-- /ix_temp/ix_temp.2
42 10 0 250000 246197 PO-- /ix_temp/ix_temp.3
43 7 0 250000 13 PO-- /ix_idx2/ix_idx2.4
44 5 0 250000 23149 PO-- /ix_dat2/ix_dat2.16
45 6 0 250000 13 PO-- /ix_idx1/ix_idx1.4
46 5 0 250000 53357 PO-- /ix_dat2/ix_dat2.17
47 5 0 250000 229517 PO-- /ix_dat2/ix_dat2.18
48 6 0 250000 32157 PO-- /ix_idx1/ix_idx1.5
49 7 0 250000 122573 PO-- /ix_idx2/ix_idx2.5
50 5 0 250000 233613 PO-- /ix_dat2/ix_dat2.19
51 6 0 250000 249997 PO-- /ix_idx1/ix_idx1.6
52 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.20
53 5 0 250000 247437 PO-- /ix_dat2/ix_dat2.21
54 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.22
55 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.23
56 6 0 250000 249997 PO-- /ix_idx1/ix_idx1.7
57 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.24
58 5 0 250000 217229 PO-- /ix_dat2/ix_dat2.25
59 11 0 250000 3 PO-- /ix_dat3/ix_dat3.1
60 11 0 250000 1 PO-- /ix_dat3/ix_dat3.2
61 11 0 250000 1 PO-- /ix_dat3/ix_dat3.3
62 11 0 250000 1 PO-- /ix_dat3/ix_dat3.4
63 11 0 250000 1 PO-- /ix_dat3/ix_dat3.5
64 11 0 250000 1 PO-- /ix_dat3/ix_dat3.6
65 11 0 250000 1 PO-- /ix_dat3/ix_dat3.7
66 11 0 250000 1 PO-- /ix_dat3/ix_dat3.8
67 11 0 250000 1 PO-- /ix_dat3/ix_dat3.9
68 11 0 250000 1 PO-- /ix_dat3/ix_dat3.10
69 11 0 250000 1 PO-- /ix_dat3/ix_dat3.11
70 11 0 250000 5 PO-- /ix_dat3/ix_dat3.12
71 11 0 250000 5 PO-- /ix_dat3/ix_dat3.13
72 11 0 250000 5 PO-- /ix_dat3/ix_dat3.14
73 11 0 250000 5 PO-- /ix_dat3/ix_dat3.15
74 11 0 250000 5 PO-- /ix_dat3/ix_dat3.16
75 12 0 250000 6 PO-- /ix_idx3/ix_idx3.1
76 12 0 250000 5 PO-- /ix_idx3/ix_idx3.2
77 12 0 250000 60833 PO-- /ix_idx3/ix_idx3.3
78 13 0 250000 246197 PO-- /ix_temp/ix_temp.4
79 11 0 250000 5 PO-- /ix_dat3/ix_dat3.17
80 11 0 250000 5 PO-- /ix_dat3/ix_dat3.18
81 4 0 250000 0 PO-- /ix_dat1/ix_dat1.16
82 4 0 250000 26312 PO-- /ix_dat1/ix_dat1.17
83 4 0 250000 0 PO-- /ix_dat1/ix_dat1.18
84 11 0 250000 233853 PO-- /ix_dat3/ix_dat3.19
85 12 0 250000 249997 PO-- /ix_idx3/ix_idx3.4
86 4 0 250000 0 PO-- /ix_dat1/ix_dat1.19
87 4 0 250000 0 PO-- /ix_dat1/ix_dat1.20
88 4 0 250000 0 PO-- /ix_dat1/ix_dat1.21
89 4 0 250000 156301 PO-- /ix_dat1/ix_dat1.22
90 11 0 250000 249997 PO-- /ix_dat3/ix_dat3.20
91 4 0 250000 0 PO-- /ix_dat1/ix_dat1.23
92 4 0 250000 0 PO-- /ix_dat1/ix_dat1.24
93 4 0 250000 0 PO-- /ix_dat1/ix_dat1.25
94 7 0 250000 249997 PO-- /ix_idx2/ix_idx2.6
95 4 0 250000 0 PO-- /ix_dat1/ix_dat1.26
96 4 0 250000 0 PO-- /ix_dat1/ix_dat1.27
97 4 0 250000 0 PO-- /ix_dat1/ix_dat1.28
98 4 0 250000 249997 PO-- /ix_dat4/ix_dat4.1
Continue restore? (y/n)y
Do you want to back up the logs? (y/n)n
WARNING: If you intend to use J/Foundation or the GLS for Unicode feature (GLU) with this server instance,
please make sure that the SHMBASE value specified in onconfig is 0x40000000L or above. Otherwise you will
have problems while attaching or dynamically adding virtual shared memory segments. Please refer to the
server machine notes for more information.
Restore a level 1 archive (y/n) n
Do you want to restore log tapes? (y/n)n
/usr/apps/inf/ver115UC3/bin/onmode -sy
Program over.
//lchen@ipdev:/login/infown > onmode -m
//lchen@ipdev:/login/infown > onstat -
IBM Informix Dynamic Server Version 11.50.UC3W2 -- On-Line -- Up 00:11:45 -- 2051200 Kbytes
//lchen@ipdev:/login/infown > echo "rename database ip_0p to ip_systest"|dbaccess
Database renamed.
Manually Re-load data files
Data loader programs on this Insight system are run by crontab automatically. You should understand and know where
the data files are, just move or copy the data files to the right directory where the data-file-loader programs search for
the data file:
Data loader program                    Data file and directory
runner.10.ksh                          /dmqjtmp/rcp/*.vax
runner.all.ksh (all Qs except 71)      /dmqjtmp/dmqvax/token/*.vax
runner.71.ksh                          (Note: the actual data files are in /dmqjtmp/rcp; touch token
                                       files with the same filenames as the data files in
                                       /dmqjtmp/dmqvax/token, and the data loader programs will then
                                       process these data files)
StartInsightBillingUpload.ksh          /dmqjtmp/rcp/*.recv
StartInsightUpload.ksh                 /insight/local/scripts/iccdataupload/in/*.txt
StartInsightSetExpiryDates.ksh         /insight/local/scripts/ICCSetExpiryDates/*.txt
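The token-file step described in the note above can be scripted. A minimal sketch; the directory parameters are included only so the same function can be exercised outside the server, and default to the paths in the table:

```shell
# For each .vax data file delivered to the rcp directory, touch a
# same-named token file so the runner programs pick it up.
touch_tokens() {
  src=${1:-/dmqjtmp/rcp}
  tok=${2:-/dmqjtmp/dmqvax/token}
  for f in "$src"/*.vax; do
    [ -e "$f" ] || continue          # no matches: skip the literal glob
    touch "$tok/$(basename "$f")"
  done
}
```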
If /var/adm/wtmp (the who temp file) grows too large:
# cp /var/adm/wtmp /recyclebox/lchen
# cat /dev/null > /var/adm/wtmp
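The two commands above can be wrapped with a size guard so the file is only archived and zeroed when it actually is large. A sketch; the function name and the 50 MB threshold are assumptions:

```shell
# Archive and truncate wtmp only when it exceeds the threshold.
trim_wtmp() {
  file=${1:-/var/adm/wtmp}
  keep_dir=${2:-/recyclebox/lchen}
  limit_kb=${3:-51200}               # assumed threshold: 50 MB
  size_kb=$(du -k "$file" | awk '{print $1}')
  if [ "$size_kb" -gt "$limit_kb" ]; then
    # keep a dated copy, then zero the live file in place
    cp "$file" "$keep_dir/wtmp.$(date +%Y%m%d)" && : > "$file"
  fi
}
```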
There are no subheaders, line, or recaps in the informix database for this
transaction number. Please reload it.
The wip appears to be from Jan 2014. The wip will need to be restored from the funnel file, and then we need to recreate
the transaction. The history is below. It was completed, but no recaps went over. No errors showed up.
The programmers have a utility that allows them to recreate recaps from headers. We need to pull the funnel file from
archives and then have them recreate recaps. Terry Bolton knows how. Everest knows where old funnel files are
located. I think it’s something like dsa3031.
hs_duty_rate table refresh
1. Stop the data processor (runner) before the new Q32 data files arrive (say, 14:00)
2. The new Q32 data files arrive at, say, 16:30
3. Check that you have enough free space (chunk files) in dbspace datadbs1; this is the default dbspace of database
ip_0p, in which the tables hs_duty_rate and hs_uom are created
4. Purge hs_duty_rate and hs_uom at, say, 19:30, and start the runner at about 20:00
[lchen@ifx01 /home/lchen/tools/db/informix/etc] $ cat unload_HS.sql
CONNECT TO 'ip_0p@ipdb' USER 'lchen' USING 'admin12';
-- CONNECT TO 'ip_0p@ipdb' USER 'informix' USING 'infxrmvb';
UNLOAD TO "/recyclebox/lchen/hs_duty_rate.20150110" SELECT * FROM hs_duty_rate;
UNLOAD TO "/recyclebox/lchen/hs_uom.20150110" SELECT * FROM hs_uom;
-- TRUNCATE hs_duty_rate;
-- TRUNCATE hs_uom;
To roll back if you find anything wrong after purging, reload the HS tables:
[lchen@ifx01 /home/lchen/tools/db/informix/etc] $ cat load_HS.sql
CONNECT TO 'ip_0p@ipdb' USER 'informix' USING 'infxrmvb';
-- SET CONSTRAINTS,INDEXES,TRIGGERS FOR hs_duty_rate DISABLED;
-- TRUNCATE hs_duty_rate;
-- DROP INDEX 58858_619280;
-- ALTER TABLE hs_duty_rate DROP CONSTRAINT u58858_619280;
-- ALTER TABLE hs_duty_rate TYPE(RAW);
LOAD FROM "/recyclebox/lchen/hs_duty_rate.20150110" INSERT INTO hs_duty_rate;
-- ALTER TABLE hs_duty_rate TYPE(STANDARD);
-- ALTER TABLE hs_duty_rate ADD CONSTRAINT primary key (hsno,hstarifftrtmnt,effdate);
-- CREATE UNIQUE INDEX ON hs_duty_rate (hsno,hstarifftrtmnt,effdate);
-- SET CONSTRAINTS,INDEXES,TRIGGERS FOR hs_duty_rate ENABLED;
-- DISCONNECT CURRENT;
5. The Q32 data files completed loading at 23:40
6. /insight/local/scripts/ICCSetExpiryDates/StartInsightSetExpiryDates.ksh will update Table hs_duty_rate
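Before the purge in step 4, it is worth verifying that the rollback unload files actually captured data. A trivial guard sketch (the function name is an assumption):

```shell
# Refuse the purge unless the rollback unload file exists and is non-empty.
safe_to_purge() {
  [ -s "$1" ]
}
# Example:
#   safe_to_purge /recyclebox/lchen/hs_duty_rate.20150110 && echo "OK to TRUNCATE"
```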
Data processing LOGS
Find Data File process log in /usr/apps/dmq/beta,if you find some non zero-size file like ierr010.xxxx, it must mean some
data process errors there. For history LOGS, /dmqjtmp/archiveBetaLog/beta
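The check described above can be automated. A sketch; the directory parameter defaults to the live log directory but is overridable (for instance, to scan the archive copy):

```shell
# List non-empty ierr* files, which indicate data-processing errors.
list_errors() {
  find "${1:-/usr/apps/dmq/beta}" -type f -name 'ierr*' -size +0c
}
```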
Storage considerations and adding database storage space (chunks)
root@ifx01:/ # lspv
hdisk2 00ca32fde4198d51 livedbvg active
hdisk3 00ca32fde4198fc0 archdbvg active
hdisk4 00ca32fde41a128f appsvg active
hdisk0 00ca32fd35a97b39 rootvg active
hdisk1 00ca32fd35a97d46 rootvg active
hdisk5 00ca32fdae1bdd5b archdbvg active
hdisk6 00ca32fdae1d5a2a archdbvg active
root@ifx01:/ # mpio_get_config -Av
Warning: Unable to open message catalog.
Frame id 0:
Storage Subsystem worldwide name: 60ab800264d8a0000456b76c6
Controller count: 2
Partition count: 1
Partition 0:
Storage Subsystem Name = 'LOIS_Imaging'
hdisk# LUN # Ownership User Label
hdisk2 0 A (preferred) lun067
hdisk3 1 B (preferred) lun068
hdisk4 2 A (preferred) lun069
hdisk5 3 B (preferred) lun106
hdisk6 4 A (preferred) lun107
We add 4 GB of data files (chunks) to datadbs1 and 4 GB of data files (chunks) to datadbs2
# touch /ach_dat1/ach_dat1.45
# touch /ach_dat1/ach_dat1.46
# touch /ach_dat1/ach_dat1.47
# touch /ach_dat1/ach_dat1.48
# touch /ach_dat2/ach_dat2.49
# touch /ach_dat2/ach_dat2.50
# touch /ach_dat2/ach_dat2.51
# touch /ach_dat2/ach_dat2.52
# chmod 660 /ach_dat1/ach_dat1.45
# chmod 660 /ach_dat1/ach_dat1.46
# chmod 660 /ach_dat1/ach_dat1.47
# chmod 660 /ach_dat1/ach_dat1.48
# chmod 660 /ach_dat2/ach_dat2.49
# chmod 660 /ach_dat2/ach_dat2.50
# chmod 660 /ach_dat2/ach_dat2.51
# chmod 660 /ach_dat2/ach_dat2.52
# chown informix:informix /ach_dat1/ach_dat1.45
# chown informix:informix /ach_dat1/ach_dat1.46
# chown informix:informix /ach_dat1/ach_dat1.47
# chown informix:informix /ach_dat1/ach_dat1.48
# chown informix:informix /ach_dat2/ach_dat2.49
# chown informix:informix /ach_dat2/ach_dat2.50
# chown informix:informix /ach_dat2/ach_dat2.51
# chown informix:informix /ach_dat2/ach_dat2.52
# su informix
For example: onspaces -a datadbs1 -p /ix_dat/ix_dat.2 -o 0 -s 1000000
$ onspaces -a datadbs1 -p /ach_dat1/ach_dat1.45 -o 0 -s 1000000
$ onspaces -a datadbs1 -p /ach_dat1/ach_dat1.46 -o 0 -s 1000000
$ onspaces -a datadbs1 -p /ach_dat1/ach_dat1.47 -o 0 -s 1000000
$ onspaces -a datadbs1 -p /ach_dat1/ach_dat1.48 -o 0 -s 1000000
$ onspaces -a datadbs2 -p /ach_dat2/ach_dat2.49 -o 0 -s 1000000
$ onspaces -a datadbs2 -p /ach_dat2/ach_dat2.50 -o 0 -s 1000000
$ onspaces -a datadbs2 -p /ach_dat2/ach_dat2.51 -o 0 -s 1000000
$ onspaces -a datadbs2 -p /ach_dat2/ach_dat2.52 -o 0 -s 1000000
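Typing the prepare-and-add sequence per chunk is error-prone. A sketch that generates the same commands for a range of chunk numbers; review the output before piping it to a shell (run the touch/chmod/chown lines as root and the onspaces lines as informix, as above). The chunk size of 1000000 KB matches the examples:

```shell
# Emit touch/chmod/chown/onspaces commands for chunks <first>..<last>.
gen_chunk_cmds() {
  dir=$1; prefix=$2; dbspace=$3; first=$4; last=$5
  i=$first
  while [ "$i" -le "$last" ]; do
    chunk="${dir}/${prefix}.${i}"
    echo "touch $chunk"
    echo "chmod 660 $chunk"
    echo "chown informix:informix $chunk"
    echo "onspaces -a $dbspace -p $chunk -o 0 -s 1000000"
    i=$(( i + 1 ))
  done
}
# Example (the datadbs1 commands above):
#   gen_chunk_cmds /ach_dat1 ach_dat1 datadbs1 45 48
```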
All additional chunks go to /ix_dat4
# chfs -a size=+1G /ix_dat4
$ sudo su - informix
$ . ./ids115.env ipdb
$ cd /ix_dat4
$ touch ix_dat4.12
$ chmod 660 ix_dat4.12
$ onspaces -a datadbs1 -p /ix_dat4/ix_dat4.1 -o 0 -s 1000000
$ onspaces -a datadbs3 -p /ix_dat4/ix_dat4.2 -o 0 -s 1000000
$ onspaces -a indxdbs3 -p /ix_dat4/ix_dat4.3 -o 0 -s 1000000
$ onspaces -a datadbs3 -p /ix_dat4/ix_dat4.4 -o 0 -s 1000000
$ onspaces -a datadbs3 -p /ix_dat4/ix_dat4.5 -o 0 -s 1000000
$ onspaces -a datadbs3 -p /ix_dat4/ix_dat4.6 -o 0 -s 1000000
$ onspaces -a indxdbs2 -p /ix_dat4/ix_dat4.7 -o 0 -s 1000000
$ onspaces -a datadbs1 -p /ix_dat4/ix_dat4.8 -o 0 -s 1000000
$ onspaces -a datadbs3 -p /ix_dat4/ix_dat4.9 -o 0 -s 1000000
$ onspaces -a datadbs1 -p /ix_dat4/ix_dat4.10 -o 0 -s 1000000
$ onspaces -a datadbs1 -p /ix_dat4/ix_dat4.11 -o 0 -s 1000000
$ onspaces -a datadbs3 -p /ix_dat4/ix_dat4.12 -o 0 -s 1000000
[lchen@ifx01 /home/lchen] $ echo "select count(*) from tariff"|dbaccess ip_0p
28274471 (2016-11-17 )
29513860 (2017-02-06)
30973361 (2017-04-11)
# chfs -a size=+5G /livebkup
Reclaiming Unused Space Within an Extent
A tblspace is the sum of its allocated extents, not a single, contiguous allocation of space. The database server tracks
tblspaces independently of the database.
Once the database server allocates disk space to a tblspace as part of an extent, that space remains dedicated to the tblspace.
Even if all extent pages become empty after you delete data, the disk space remains unavailable for use by other tables.
Important:
When you delete rows in a table, the database server reuses that space to insert new rows into the same table. This section
describes procedures to reclaim unused space for use by other tables.
You might want to resize a table that does not require the entire amount of space that was originally allocated to it. You can
reallocate a smaller dbspace and release the unneeded space for other tables to use.
As the database server administrator, you can reclaim the disk space in empty extents and make it available to other users by
rebuilding the table. To rebuild the table, use any of the following SQL statements:
ALTER INDEX
UNLOAD and LOAD
ALTER FRAGMENT
Reclaiming Space in an Empty Extent with ALTER INDEX
If the table with the empty extents includes an index, you can execute the ALTER INDEX statement with the TO CLUSTER
clause. Clustering an index rebuilds the table in a different location within the dbspace. All the extents associated with the
previous version of the table are released. Also, the newly built version of the table has no empty extents.
Example:
//lchen@ipdev:/login/infown > cat alterindex.sql
database sysadmin;
alter index ix_ph_run_01 to cluster;
close database;
For more information about the syntax of the ALTER INDEX statement, see the IBM Informix Guide to SQL: Syntax. For more
information about clustering, see Clustering.
rootdbs full: Reclaiming Space in an Empty Extent with the UNLOAD and LOAD Statements or the onunload and
onload Utilities
If the table does not include an index, you can unload the table, re-create the table (either in the same dbspace or in another
one), and reload the data with the UNLOAD and LOAD statements or the onunload and onload utilities.
Example:
//lchen@ipdev:/login/infown > cat reclaimspace.sql
database sysadmin;
unload to "/recyclebox/lchen/ph_run" select * from ph_run;
truncate ph_run;
-- load from "/recyclebox/lchen/ph_run" insert into ph_run;
close database;
Check the online log:
$ onstat -rm
It will ask you to run:
$ oncheck -cDI sysadmin:"informix".ph_run
This reads all pages, except for blobpages and sbpages, from the tblspace for the specified database,
table, fragment, or fragments, and checks each page for consistency. This command compares entries
in the bitmap page to the pages to verify mapping.
The BEGIN WORK statement is valid only in a database that supports transaction logging. This statement is not valid in an
ANSI-compliant database. Each row that an UPDATE, DELETE, INSERT, or MERGE statement affects during a transaction is
locked and remains locked throughout the transaction. A transaction that contains many such statements or that contains
statements that affect many rows can exceed the limits that your operating system or the database server configuration imposes
on the number of simultaneous locks. If no other user is accessing the table, you can avoid locking limits and reduce locking
overhead by locking the table with the LOCK TABLE statement after you begin the transaction. Like other locks, this table lock is
released when the transaction terminates. The example of a transaction on “Example of BEGIN WORK” on page 2-75 includes a
LOCK TABLE statement. Important: Issue the BEGIN WORK statement only if a transaction is not in progress. If you issue a
BEGIN WORK statement while you are in a transaction, the database server returns an error.
//lchen@ipdev:/login/infown > vi reclaimspace.sql
"reclaimspace.sql" 8 lines, 194 characters
database sysadmin;
BEGIN WORK;
lock table ph_run;
-- unload to "/recyclebox/lchen/ph_run" select * from ph_run;
-- truncate ph_run;
load from "/recyclebox/lchen/ph_run" insert into ph_run;
COMMIT WORK;
close database;
For further information about selecting the correct utility or statement, see the IBM Informix Migration Guide. For more
information about the syntax of the UNLOAD and LOAD statements, see the IBM Informix Guide to SQL: Syntax.
Releasing Space in an Empty Extent with ALTER FRAGMENT
You can use the ALTER FRAGMENT statement to rebuild a table, which releases space within the extents that were allocated
to that table. For more information about the syntax of the ALTER FRAGMENT statement, see the IBM Informix Guide to SQL:
Syntax.
There is an environment variable, IFX_DIRTY_WAIT, that defines the number of seconds a DDL statement will wait
for existing dirty readers to finish their access to the target table. When set, the variable also prevents new dirty
readers from accessing the table.
The variable can be set in the Informix® Dynamic Server™ environment (before the database is started) or in the client
environment. Setting it on the client side will override the setting on the server side.
Example:
IFX_DIRTY_WAIT=n
where n is a positive integer representing the number of seconds that your session will wait for a given dirty reader to
finish accessing the target table. If this amount of time is not enough, the session returns the same error as it would
without the variable being set.
Example:
Using the UNIX Korn shell and setting the timeout value to 300 seconds, you would set the environment variable as follows:
$ export IFX_DIRTY_WAIT=300
$ dbaccess sysadmin alterindex
Connect Session
When a client application connects to the database server, the database server performs the following tasks:
Creates a session structure, called a session control block, to hold information about the connection and the user
Creates a thread structure, called a thread-control block (TCB), to hold information about the current state of the thread
Determines the server-processing locale, the locale to use for SQL statements during the session
Initializes a primary thread, called the session thread (or sqlexec thread), to handle client-application requests
When the client application successfully establishes a connection, it begins a session. Only a client application can begin a
session. The session context consists of data structures and state information that are associated with a specific session, such
as cursors, save sets, and user data.
TIPS: Session data: When a client application requests a connection to the database server, the database server begins a
session with the client and creates a data structure for the session in shared memory called the session-control block. The
session-control block stores the session ID, the user ID, the process ID of the client, the name of the host computer, and various
status flags
Monitoring locks
You can analyze information about locks and monitor locks by viewing information in the internal lock table that contains stored
locks.
View the lock table with onstat -k. Figure 1 shows sample output for onstat -k.
Figure 1. onstat -k output
Locks
address wtlist owner lklist type tblsnum rowid key#/bsiz
300b77d0 0 40074140 0 HDR+S 10002 106 0
300b7828 0 40074140 300b77d0 HDR+S 10197 123 0
300b7854 0 40074140 300b7828 HDR+IX 101e4 0 0
300b78d8 0 40074140 300b7854 HDR+X 101e4 102 0
4 active, 5000 total, 8192 hash buckets
In this example, a user is inserting one row in a table. The user holds the following locks (described in the order shown):
A shared lock on the database
A shared lock on a row in the systables system catalog table
An intent-exclusive lock on the table
An exclusive lock on the row
To determine the table to which the lock applies, execute the following SQL statement. For tblsnum, substitute the value shown
in the tblsnum field in the onstat -k output.
SELECT *
FROM SYSTABLES
WHERE HEX(PARTNUM) = "tblsnum";
Where tblsnum is the modified value that onstat -k returns. For example, if onstat -k returns 10027f, tblsnum is
0x0010027F.
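The conversion from the tblsnum that onstat -k reports to the zero-padded value compared against HEX(partnum) can be sketched as follows (a minimal helper for illustration, not an Informix utility):

```python
def tblsnum_to_hex(tblsnum: str) -> str:
    """Convert the tblsnum shown by onstat -k (e.g. "10027f") into the
    zero-padded hex form used in the HEX(partnum) comparison."""
    return "0x%08X" % int(tblsnum, 16)

print(tblsnum_to_hex("10027f"))  # 0x0010027F
```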
You can also query the syslocks table in the sysmaster database to obtain information about each active lock. The syslocks
table contains the following columns.
Column    Description
dbsname   Database on which the lock is held
tabname   Name of the table on which the lock is held
rowidlk   ID of the row on which the lock is held (0 indicates a table lock)
keynum    The key number for the row
type      Type of lock
owner     Session ID of the lock owner
waiter    Session ID of the first waiter on the lock
Monitoring lock waits and lock errors: You can view information about sessions, lock usage, and lock waits.
Summary: onstat -k shows the locks. The owner column of onstat -k has the same value as the address column of
onstat -u. The onstat -u output should help you identify the owner of the locks in onstat -k: the owner's username is
listed in the user column of onstat -u. The onstat -u output also has a sessid column. You can use the sessid value to find
out more about the session that holds the lock. Run onstat -g ses <sessid value>.
If the application executes SET LOCK MODE TO WAIT, the database server waits for a lock to be released instead of
returning an error. An unusually long wait for a lock can give users the impression that the application is hanging.
In Figure 2, the onstat -u output shows that session ID 84 is waiting for a lock (L in the first column of the Flags field). To
find out the owner of the lock, use the onstat -k command.
Figure 2. onstat -u and onstat -k output showing lock usage
onstat -u
Userthreads
address flags sessid user tty wait tout locks nreads nwrites
40072010 ---P--D 7 informix - 0 0 0 35 75
400723c0 ---P--- 0 informix - 0 0 0 0 0
40072770 ---P--- 1 informix - 0 0 0 0 0
40072b20 ---P--- 2 informix - 0 0 0 0 0
40072ed0 ---P--F 0 informix - 0 0 0 0 0
40073280 ---P--B 8 informix - 0 0 0 0 0
40073630 ---P--- 9 informix - 0 0 0 0 0
400739e0 ---P--D 0 informix - 0 0 0 0 0
40073d90 ---P--- 0 informix - 0 0 0 0 0
40074140 Y-BP--- 81 lsuto 4 50205788 0 4 106 221
400744f0 --BP--- 83 jsmit - 0 0 4 0 0
400753b0 ---P--- 86 worth - 0 0 2 0 0
40075760 L--PR-- 84 jones 3 300b78d8 -1 2 0 0
13 active, 128 total, 16 maximum concurrent
onstat -k
Locks
address wtlist owner lklist type tblsnum rowid key#/bsiz
300b77d0 0 40074140 0 HDR+S 10002 106 0
300b7828 0 40074140 300b77d0 HDR+S 10197 122 0
300b7854 0 40074140 300b7828 HDR+IX 101e4 0 0
300b78d8 40075760 40074140 300b7854 HDR+X 101e4 100 0
300b7904 0 40075760 0 S 10002 106 0
300b7930 0 40075760 300b7904 S 10197 122 0
6 active, 5000 total, 8192 hash buckets
To find out the owner of the lock for which session ID 84 is waiting:
1. Obtain the address of the lock in the wait field (300b78d8) of the onstat -u output.
2. Find this address (300b78d8) in the Locks address field of the onstat -k output.
The owner field of this row in the onstat -k output contains the address of the user thread (40074140).
3. Find this address (40074140) in the Userthreads field of the onstat -u output.
The sessid field of this row in the onstat -u output contains the session ID (81) that owns the lock.
To eliminate the contention problem, you can have the user exit the application gracefully. If this solution is not possible, you
can stop the application process or remove the session with onmode -z.
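The three lookup steps above can be automated against captured onstat output. The following is a minimal sketch, not an official tool; it assumes the whitespace-delimited column layout shown in the sample output:

```python
def find_lock_owner(onstat_u: str, onstat_k: str, waiter_sessid: str):
    """Return the session ID that owns the lock a given session is waiting on,
    given captured 'onstat -u' and 'onstat -k' text output."""
    users = {}        # userthread address -> sessid
    wait_addr = None  # lock address from the waiter's wait field
    for line in onstat_u.splitlines():
        f = line.split()
        if len(f) < 6 or f[0] == "address":
            continue
        try:
            int(f[0], 16)  # keep only userthread rows (hex addresses)
        except ValueError:
            continue
        users[f[0]] = f[2]
        if f[2] == waiter_sessid:
            wait_addr = f[5]
    for line in onstat_k.splitlines():
        f = line.split()
        if f and f[0] == wait_addr:
            return users.get(f[2])  # owner field holds the userthread address
    return None

# Abbreviated sample rows from the outputs above:
U = """address flags sessid user tty wait tout locks nreads nwrites
40074140 Y-BP--- 81 lsuto 4 50205788 0 4 106 221
40075760 L--PR-- 84 jones 3 300b78d8 -1 2 0 0"""
K = """address wtlist owner lklist type tblsnum rowid key#/bsiz
300b78d8 40075760 40074140 300b7854 HDR+X 101e4 100 0"""

print(find_lock_owner(U, K, "84"))  # 81
```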
Release the log files
The logical log contains a record of changes made to a database server instance. The logical-log records are used to roll
back transactions, recover from system failures, and so on. The following parameters affect logical logging.
Configuration parameter   Description
DYNAMIC_LOGS              Determines whether the database server allocates new logical-log files automatically. For more information, see Logical log.
LOGBUFF                   Determines the amount of shared memory reserved for the buffers that hold the logical-log records until they are flushed to disk. For information about how to tune the logical-log buffer, see Logical-log buffer.
LOGFILES                  Specifies the number of logical-log files used to store logical-log records until they are backed up on disk. For more information, see Estimate the size and number of log files.
LOGSIZE                   Specifies the size of each logical-log file.
LTXHWM                    Specifies the percentage of the available logical-log space that, when filled, triggers the database server to check for a long transaction. For more information, see Set high-watermarks for rolling back long transactions.
LTXEHWM                   Specifies the point at which the long transaction being rolled back is given exclusive access to the logical log.
TEMPTAB_NOLOG             Disables logging on temporary tables.
Scenario: the database server suspends all processing. If the database server attempts to switch to the next online logical-log
file but finds that the next log file in sequence is still in use, the database server immediately suspends all processing
(processing stops to protect the data in the log files).
All of the following criteria must be satisfied before the database server frees a logical-log file for reuse:
1. The log file is backed up.
2. The logical-log file does not contain the oldest update not yet flushed to disk.
3. No records within the logical-log file are associated with open transactions.
So, the first thing to do is to back up the filled log files:
$ ontape -a
Second, the database server always forces a checkpoint when it switches to the last available log. If the previous
checkpoint record or the oldest update that is not yet flushed to disk is located in the log that follows the last available log,
force a checkpoint:
$ onmode -c
Then switch the current log file to the next available log file manually:
$ onmode -l
To add log files manually: if there is no free space in the logical-log dbspace, you must first add chunks to that dbspace,
then add the log files with onparams:
$ onparams -a -d <dbspace>
Tips: If you do not want to wait until the transactions complete, take the database server to quiescent mode immediately;
all open transactions will then be rolled back:
$ onmode -u
To quickly load a large, existing standard table
1. Drop indexes, referential constraints, and unique constraints.
2. Change the table to nonlogging.
The following sample SQL statement changes a STANDARD table to nonlogging:
ALTER TABLE largetab TYPE(RAW);
3. Load the table using a load utility such as dbload or the High-Performance Loader (HPL).
For more information on dbexport and dbload, see the IBM Informix: Migration Guide. For more information on HPL, see
the IBM Informix: High-Performance Loader User's Guide.
4. Perform a level-0 backup of the nonlogging table.
You must make a level-0 backup of any nonlogging table that has been modified before you convert it to STANDARD type.
The level-0 backup provides a starting point from which to restore the data.
5. Change the nonlogging table to a logging table before you use it in a transaction.
The following sample SQL statement changes a raw table to a standard table:
ALTER TABLE largetab TYPE(STANDARD);
Warning:
It is recommended that you not use nonlogging tables within a transaction where multiple users can modify the data. If you
need to use a nonlogging table within a transaction, either set Repeatable Read isolation level or lock the table in exclusive
mode to prevent concurrency problems.
For more information on standard tables, see the previous section, Advantages of Logging Tables.
6. Re-create indexes, referential constraints, and unique constraints
Example:
[lchen@ifx01 /home/lchen/tools/db/informix/etc] $ cat loadTable.sql
-- CONNECT TO 'ip_systest@systestdb' USER 'informix' USING 'ifxdev'
CONNECT TO 'ip_0p@ipdb' USER 'informix' USING 'infxrmvb'
-- CONNECT TO 'sysadmin@systestdb';
-- SET CONSTRAINTS,INDEXES,TRIGGERS FOR hs_duty_rate DISABLED;
-- TRUNCATE hs_duty_rate;
-- DROP INDEX 58858_619251;
-- ALTER TABLE hs_duty_rate DROP CONSTRAINT u58858_619251;
-- ALTER TABLE hs_duty_rate TYPE(RAW);
-- LOAD FROM "/recyclebox/lchen/hs_duty_rate.20141004" INSERT INTO hs_duty_rate;
-- ALTER TABLE hs_duty_rate TYPE(STANDARD);
-- ALTER TABLE hs_duty_rate ADD CONSTRAINT primary key (hsno,hstarifftrtmnt,effdate);
-- CREATE UNIQUE INDEX ON hs_duty_rate (hsno,hstarifftrtmnt,effdate);
-- SET CONSTRAINTS,INDEXES,TRIGGERS FOR hs_duty_rate ENABLED;
-- DISCONNECT CURRENT;
To quickly load a new, large table
1. Create a nonlogging table in a logged database.
The following sample SQL statements create a nonlogging table:
CREATE DATABASE history WITH LOG;
CONNECT TO DATABASE history;
CREATE RAW TABLE history (...
);
2. Load the table using a load utility such as dbload or the High-Performance Loader (HPL).
For more information on dbexport and dbload, see the IBM Informix: Migration Guide. For more information on HPL, see
the IBM Informix: High-Performance Loader User's Guide.
3. Perform a level-0 backup of the nonlogging table.
You must make a level-0 backup of any nonlogging table that has been modified before you convert it to STANDARD type.
The level-0 backup provides a starting point from which to restore the data.
4. Change the nonlogging table to a logging table before you use it in a transaction.
The following sample SQL statement changes a raw table to a standard table:
ALTER TABLE largetab TYPE(STANDARD);
Warning:
It is recommended that you not use nonlogging tables within a transaction where multiple users can modify the data. If you
need to use a nonlogging table within a transaction, either set Repeatable Read isolation level or lock the table in exclusive
mode to prevent concurrency problems.
For more information on standard tables, see the previous section, Advantages of Logging Tables.
5. Create indexes on columns most often used in query filters.
6. Create any referential constraints and unique constraints, if needed.
What happens between a client and server when a TCP/IP connection is opened
When a TCP/IP connection is opened, the following information is read on the client side:
INFORMIXSERVER
hosts file information (INFORMIXSQLHOSTS, $INFORMIXDIR/etc/sqlhosts, the registry entry on Windows NT) and
services file information
Other environment variables
Resource files
The following information is read on the server side:
DBSERVERNAME
DBSERVERALIASES
Server environment variables and configuration parameters, including any NETTYPE configuration parameter settings
that manage TCP/IP connections
Strategy for estimating the size of the physical log
The size of the physical log depends on two factors: the rate at which transactions generate physical log activity and whether
you set the RTO_SERVER_RESTART configuration parameter
The rate at which transactions generate physical log activity can affect checkpoint performance. During checkpoint processing, if
the physical log starts getting too full as transactions continue to generate physical log data, the database server blocks
transactions to allow the checkpoint to complete and to avoid a physical log overflow.
To avoid transaction blocking, the database server must have enough physical log space to contain all of the transaction activity
that occurs during checkpoint processing. Checkpoints are triggered whenever the physical log becomes 75 percent full. When
the physical log becomes 75 percent full, checkpoint processing must complete before the remaining 25 percent of the physical
log is used. Transaction blocking occurs as soon as the system detects a potential for a physical log overflow, because every
active transaction might generate physical log activity.
For example, suppose you have a one gigabyte physical log and 1000 active transactions. 1000 active transactions have the
potential to generate approximately 80 megabytes of physical log activity if every transaction is in a critical section
simultaneously. When 750 megabytes of the physical log fills, the database server triggers a checkpoint. If the checkpoint has
not completed by the time the 920 megabytes of the physical log are used, transaction blocking occurs until the checkpoint
completes. If transaction blocking takes place, the server automatically triggers more frequent checkpoints to avoid transaction
blocking. You can disable the generation of automatic checkpoints.
The server might also trigger checkpoints if many dirty partitions exist, even if the physical log is not 75 percent full, because
flushing the modified partition data to disk requires physical log space. When the server checks if the Physical Log is 75 percent
full, the server also checks if the following condition is true:
(Physical Log Pages Used + Number of Dirty Partitions) >= (Physical Log Size * 9) / 10
For more information about checkpoint processing and automatic checkpoints, see Checkpoints.
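The two triggers described above (the 75 percent fill mark and the dirty-partition condition) can be sketched as a simple check. The page counts here are hypothetical inputs for illustration, not values read from a server:

```python
def checkpoint_needed(pages_used: int, log_size_pages: int,
                      dirty_partitions: int = 0) -> bool:
    """Return True if either physical-log trigger described above fires."""
    if pages_used >= (log_size_pages * 3) // 4:   # physical log 75 percent full
        return True
    # (Physical Log Pages Used + Number of Dirty Partitions) >= size * 9/10
    return (pages_used + dirty_partitions) >= (log_size_pages * 9) // 10

print(checkpoint_needed(700, 1000))        # False
print(checkpoint_needed(750, 1000))        # True  (75 percent trigger)
print(checkpoint_needed(700, 1000, 250))   # True  (dirty-partition condition)
```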
The second factor to consider when estimating the size of the physical log is your use of the
RTO_SERVER_RESTART configuration parameter to specify a target amount of time for fast recovery. If you are not required to
consider fast recovery time, you do not need to set the RTO_SERVER_RESTART configuration parameter. If you
specify a value for the RTO_SERVER_RESTART configuration parameter, transaction activity generates additional physical log
activity.
Typically, this additional physical log activity has little or no effect on transaction performance. The extra logging is used to assist
the buffer pool during fast recovery, so that log replay performs optimally. If the physical log is considerably smaller than the
combined sizes of all buffer pools, page flushing and page faulting occur during fast recovery. The page flushing and page
faulting substantially reduce fast recovery performance, and the database server cannot maintain the
RTO_SERVER_RESTART policy.
For systems with less than four gigabytes of buffer pool space, the physical log can be sized at 110 percent of the combined size
of all the buffer pools. For larger buffer pools, start with four gigabytes of physical log space and then monitor checkpoint activity.
If checkpoints occur too frequently and seem to affect performance, increase the physical log size.
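The sizing guideline above can be expressed as a small helper. This is a sketch of the rule of thumb only; actual tuning should follow checkpoint monitoring with onstat -g ckp:

```python
def physical_log_size_kb(bufferpool_kb: int) -> int:
    """Starting-point physical log size per the guideline above:
    110 percent of the combined buffer pools when they total under 4 GB,
    otherwise start at 4 GB and adjust based on checkpoint activity."""
    four_gb_kb = 4 * 1024 * 1024
    if bufferpool_kb < four_gb_kb:
        return int(bufferpool_kb * 1.1)
    return four_gb_kb

print(physical_log_size_kb(1024 * 1024))      # 1 GB of buffer pools
print(physical_log_size_kb(8 * 1024 * 1024))  # 8 GB of buffer pools -> cap at 4 GB
```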
A rare condition, called a physical-log overflow, can occur when the database server is configured with a small physical log and
has many users. Following the previously described size guidelines helps avoid physical-log overflow. The database server
generates performance warnings to the message log whenever it detects suboptimal configurations.
You can use the onstat -g ckp command to display configuration recommendations if a suboptimal configuration is detected.
To change the size and location of the physical log, run the following command after you bring the database server to quiescent
or administration mode:
onparams -p -s size -d dbspace -y
size
The new size of the physical log in KB
dbspace
Specifies the dbspace where the physical log is to be located
The following example changes the size and location of the physical log. The new physical-log size is 400 KB, and the log
is located in the dbspace6 dbspace:
onparams -p -s 400 -d dbspace6 -y
The onstat -g rea command
Use the onstat -g rea option to monitor the number of threads in the ready queue. If the number of threads in the ready queue is
growing for a class of virtual processors (for example, the CPU class), you might be required to add more virtual processors to your
configuration.
[lchen@ifx01 /home/lchen] $ onstat -g rea
IBM Informix Dynamic Server Version 11.50.UC3W2 -- On-Line -- Up 255 days 13:43:43 -- 2051200 Kbytes
Ready threads:
tid tcb rstcb prty status vp-class name
The onstat -g ioq command
Use the onstat -g ioq option to determine whether you must allocate additional virtual processors. The command onstat -g ioq
displays the length and other statistics about I/O queues. If the length of the I/O queue is growing, I/O requests are accumulating faster
than the AIO virtual processors can process them. If the length of the I/O queue continues to show that I/O requests are accumulating,
consider adding AIO virtual processors.
Create intermediate data to hold WIPs recycled b3iid records on the IPDEV database
ip_arch
1. Create chunks, create a dbspace on the newly created chunks, and create database ip_ardb in dbspace
ip_ardb, using the schema from archive ip_arch05
$ touch /ix_dat2/ix_ardb.0
$ touch /ix_dat2/ix_ardb.1
$ touch /ix_dat2/ix_ardb.2
$ touch /ix_dat2/ix_ardb.3
$ touch /ix_dat2/ix_ardb.4
$ onspaces -c -d ip_ardb -p /ix_dat2/ix_ardb.0 -o 0 -s 1000000
$ onspaces -a ip_ardb -p /ix_dat2/ix_ardb.1 -o 0 -s 1000000
$ onspaces -a ip_ardb -p /ix_dat2/ix_ardb.2 -o 0 -s 1000000
$ onspaces -a ip_ardb -p /ix_dat2/ix_ardb.3 -o 0 -s 1000000
$ onspaces -a ip_ardb -p /ix_dat2/ix_ardb.4 -o 0 -s 1000000
To change a dbspace name, disconnect all sessions and bring the database server to quiescent mode;
after the rename completes, set the database server online again:
$ onmode -u
$ onspaces -ren ip_archdb -n iparchdbs
$ onmode -m
Then modify ip_arch05.sql:
CREATE DATABASE ip_arch IN iparchdbs WITH LOG;
CONNECT TO 'ip_arch@systestdb';
TIPS: there are remote database server/database definitions in this SQL script, such as
ip_0p@ipdb:informix.b3. It will take a very long time trying to connect to such a database if the
database cannot be accessed due to network issues or any definition errors in the
sqlhosts/services files.
[lchen@ifx01 /home/lchen] $ echo "select count(*) from ip_arch@systestdb:informix.b3"|dbaccess
ip_0p
Database selected.
(count(*))
0
1 row(s) retrieved.
Database closed.
[lchen@ifx01 /home/lchen] $ echo "insert into ip_arch@systestdb:informix.usport_exit select * from
usport_exit"|dbaccess ip_0p
Database selected.
437 row(s) inserted.
Database closed.
[lchen@ifx01 /home/lchen] $ echo "insert into ip_arch@systestdb:informix.canct_off select * from
canct_off"|dbaccess ip_0p
Database selected.
323 row(s) inserted.
Database closed.
[lchen@ifx01 /home/lchen] $ echo "insert into ip_arch@systestdb:informix.ctry_code select * from
ctry_code"|dbaccess ip_0p
Database selected.
623 row(s) inserted.
Database closed.
[lchen@ifx01 /home/lchen] $ echo "insert into ip_arch@systestdb:informix.stringtable select * from
stringtable"|dbaccess ip_0p
Database selected.
39 row(s) inserted.
Database closed.
[lchen@ifx01 /home/lchen] $ echo "insert into ip_arch@systestdb:informix.transp_mode select * from
transp_mode"|dbaccess ip_0p
Database selected.
7 row(s) inserted.
Database closed.
Map File of Informix table fields to Locus record
# This table maps the cci informix fields to the Locus record
#
# Column 1 - Table (CCIH or CCID)
# 2 - Informix field name ('default' means no corresponding field)
# 3 - Informix field length
# 4 - Locus offset
# 5 - Locus length
# 6 - Locus field type (A,N,D)
#
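Entries in this map file can be parsed with a short sketch. The field names below are inferred from the sample entries in the listing and are an assumption, not part of an official specification:

```python
def parse_locusmap_line(line: str) -> dict:
    """Split one whitespace-delimited locusmap.dat entry into named fields.
    Field names are inferred from the sample data (an assumption)."""
    f = line.split()
    return {
        "record_type": f[0],        # e.g. ACCO
        "table": f[1],              # Informix table, e.g. CLIENT_INVOICE
        "field": f[2],              # Informix field name
        "locus_offset": int(f[3]),  # Locus offset
        "locus_length": int(f[4]),  # Locus length
        "field_type": f[5],         # A, N, D, C, K, ...
        "section": f[6],            # LK, K, D, RL, ...
        "picture": f[7],            # COBOL-style picture, e.g. 9(6)
    }

entry = parse_locusmap_line("ACCO CLIENT_INVOICE LIIClientNo 3 6 K LK 9(6)")
print(entry["field"], entry["locus_length"])  # LIIClientNo 6
```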
[lchen@ifx01 /usr/apps/dmq/src] $ cat locusmap.dat
ACCO CLIENT_INVOICE LIIClientNo 3 6 K LK 9(6)
ACCO CLIENT_INVOICE LIIAccountNo 9 3 K LK 9(3)
ACCO CLIENT_INVOICE LIIBrchNo 20 3 K LK 9(3)
ACCO CLIENT_INVOICE LIIRefNo 23 6 K LK 9(6)
ACCO CLIENT_INVOICE LIIRefText 29 3 K LK X(3)
ACCO CLIENT_INVOICE ItemStatus 504 3 N D X(1)
ACCO CLIENT_INVOICE ItemTypeCode 29 2 A D X(2)
ACCO CLIENT_INVOICE ItemDate 12 8 D D 9(8)
ACCO CLIENT_INVOICE TotDuty 377 13 C D 9(9).9(2)-
ACCO CLIENT_INVOICE TotAmt 92 13 C D 9(9).9(2)-
ACCO CLIENT_INVOICE Balance 105 13 C D 9(9).9(2)-
ACCO CLIENT_INVOICE TransactionNo 67 15 A D X(15)
ACCO CLIENT_INVOICE ItemFormCode 501 3 A D X(3)
ACCO CLIENT_INVOICE RecordLength 0 534 RL NA NA
ACCO CLIENT_INVOICE LocusKey 0 0 LK NA NA
ACCO CLIENT_INVOICE DebugCounters 0 0 DC NA NA
ACUS LII_CLIENT LIIClientNo 3 6 K LK 9(6)
ACUS LII_CLIENT LastPaymntDate 12 8 D D 9(8)
ACUS LII_CLIENT LastChequeAmt 20 13 C D 9(9).9(2)-
ACUS LII_CLIENT Terms 41 2 N D 9(2)
ACUS LII_CLIENT RecordLength 0 42 RL NA NA
ACUS LII_CLIENT LocusKey 0 0 LK NA NA
ACUS LII_CLIENT DebugCounters 0 0 DC NA NA
B2CL CLAIM_LOG ClaimLogIID 0 0 N SQL NA
B2CL CLAIM_LOG LIIClientNo 3 6 K K 9(6)
B2CL CLAIM_LOG B3TransNo 17 9 K K X(9)
B2CL CLAIM_LOG B3AcctSecurNo 12 5 K K X(5)
B2CL CLAIM_LOG B3TransSeqNo 26 2 N K 9(2)
B2CL CLAIM_LOG ClaimRefNo 28 14 A D X(14)
B2CL CLAIM_LOG B2BrchNo 120 3 N D 9(3)
B2CL CLAIM_LOG B2RefNo 123 6 N D 9(6)
B2CL CLAIM_LOG ClaimAmount 104 13 C D 9(9).9(2)-
B2CL CLAIM_LOG ClaimStatus 117 3 N D X(3)
B2CL CLAIM_LOG ClaimCode 67 2 A D X(2)
B2CL CLAIM_LOG CustomsDesn 66 1 A D X(1)
B2CL CLAIM_LOG ReceivedDate 129 8 D D 9(8)
B2CL CLAIM_LOG Submitdate 42 8 D D 9(8)
B2CL CLAIM_LOG StampedCopydate 50 8 D D 9(8)
B2CL CLAIM_LOG CustomsDesnDate 58 8 D D 9(8)
B2CL CLAIM_LOG ClaimVendorName 69 35 A D X(35)
B2CL CLAIM_LOG RecordLength 0 136 RL NA NA
B2CL CLAIM_LOG LocusKey 0 0 LK NA NA
B2CL CLAIM_LOG DebugCounters 0 0 DC NA NA
B2DA AS_ACCOUNTED AsAcctIID 0 0 N SQL NA
B2DA AS_ACCOUNTED ClaimLogIID 0 0 N LOO NA
B2DA AS_ACCOUNTED B2SubHdrNo 13 2 N K 9(2)
B2DA AS_ACCOUNTED B3LineNo 16 3 N K 9(3)
B2DA AS_ACCOUNTED B2LineNo 19 3 N K 9(3)
B2DA AS_ACCOUNTED HSNo 86 10 A D 9(10)
B2DA AS_ACCOUNTED B3Description 33 53 A D X(53)
B2DA AS_ACCOUNTED B2BrchNo 4 3 N SK 9(3)
B2DA AS_ACCOUNTED B2RefNo 7 6 N SK 9(6)
B2DA AS_ACCOUNTED RecordLength 0 96 RL NA NA
B2DA AS_ACCOUNTED LocusKey 0 0 LK NA NA
B2DA AS_ACCOUNTED DebugCounters 0 0 DC NA NA
B2DC AS_CLAIMED AsClaimedIID 0 0 N SQL NA
B2DC AS_CLAIMED ClaimLogIID 0 0 N LOO NA
B2DC AS_CLAIMED B2SubHdrNo 13 2 N K 9(2)
B2DC AS_CLAIMED B3LineNo 16 3 N K 9(3)
B2DC AS_CLAIMED B2LineNo 19 3 N K 9(3)
B2DC AS_CLAIMED HSNo 86 10 A D 9(10)
B2DC AS_CLAIMED B3Description 33 53 A D X(53)
B2DC AS_CLAIMED B2BrchNo 4 3 N SK 9(3)
B2DC AS_CLAIMED B2RefNo 7 6 N SK 9(3)
B2DC AS_CLAIMED RecordLength 0 96 RL NA NA
B2DC AS_CLAIMED LocusKey 0 0 LK NA NA
B2DC AS_CLAIMED DebugCounters 0 0 DC NA NA
B3BD B3B B3BIID 0 0 N SQL NA
B3BD B3B B3IID 0 0 N LOO NA
B3BD B3B CCDSeqNo 12 3 N D 9(3)
B3BD B3B CargCntrlNo 15 25 A D X(25)
B3BD B3B Quantity 40 8 N D 9(8)
B3BD B3B LIIBrchNo 3 3 N SK 9(3)
B3BD B3B LIIRefNo 6 6 N SK 9(3)
B3BD B3B RecordLength 0 61 RL NA NA
B3BD B3B LocusKey 0 0 LK NA NA
B3BD B3B DebugCounters 0 0 DC NA NA
B3BZ B3B B3Iid 0 0 N LOO NA
B3BZ B3B LiiBrchNo 3 3 N SK NA
B3BZ B3B LiiRefNo 6 6 N SK NA
B3BZ B3B RecordLength 0 61 RL NA NA
B3BZ B3B LocusKey 0 0 LK NA NA
B3BZ B3B DebugCounters 0 0 DC NA NA
B3EH B3 B3IID 0 0 N SQL NA
B3EH B3 LIIBrchNo 4 3 N SK 9(3)
B3EH B3 LIIRefNo 7 6 N SK 9(6)
B3EH B3 LIIClientNo 51 6 N D 9(6)
B3EH B3 LIIAccountNo 57 3 N D 9(3)
B3EH B3 AcctSecuNo 37 5 N D X(5)
B3EH B3 B3Type 23 2 A D X(2)
B3EH B3 CargoCntrlNo 101 25 A D X(25)
B3EH B3 CarrierCode 1142 4 A D X(4)
B3EH B3 CreateDate 25 12 DT4 D 9(12)
B3EH B3 CustOffc 134 3 LZ4/NZ D 9(3)
B3EH B3 K84Date 867 8 D D 9(8)
B3EH B3 ModeTransp 1129 1 NZ D 9(1)
B3EH B3 PortUnlading 1130 4 NZ D 9(4)
B3EH B3 RelDate 1070 12 DT4 D 9(12)
B3EH B3 Status 925 3 N D 9(3)
B3EH B3 TotB3Duty 485 13 C D 9(9).9(2)-
B3EH B3 TotB3ExcTax 511 13 C D 9(9).9(2)-
B3EH B3 TotB3GST 498 13 C D 9(9).9(2)-
B3EH B3 TotB3SIMA 1042 15 C D 9(11).9(2)-
B3EH B3 TotB3VFD 549 15 C D 9(11).9(2)-
B3EH B3 TransNo 42 9 N D X(9)
B3EH B3 Weight 1112 7 N D 9(7)
B3EH B3 PurchaseOrder1 211 15 A D X(15)
B3EH B3 PurchaseOrder2 226 15 A D X(15)
B3EH B3 ShipVia 163 18 A D X(18)
B3EH B3 LocationOfGoods 146 17 A D X(17)
B3EH B3 VendorName 76 25 A D X(25)
B3EH B3 VendorState 1099 3 A D X(3)
B3EH B3 VendorZip 1102 5 NZ D 9(5)
B3EH B3 Freight 1134 8 C D 9(8)
B3EH B3 USPortExit 1107 4 A D X(4)
B3EH B3 BillOfLading 181 10 A D X(10)
B3EH B3 CargCntrlQty 1091 8 N D 9(8)
B3EH B3 ApprovedDate 1307 8 D D 9(8)
B3EH B3 ContainerNo 191 20 A D X(20)
B3EH B3 SBRNNo 1292 15 A D X(15)
B3EH B3 CCNQty 1091 8 N D 9(8)
B3EH B3 CCINumLines 575 5 N D 9(5)
B3EH B3 InvoiceQty 126 8 N D 9(8)
B3EH B3 WarehouseNum 1038 3 N D 9(3)
B3EH B3 EntName 306 35 A D X(35)
B3EH B3 EntAddr1 341 35 A D X(35)
B3EH B3 EntAddr2 376 35 A D X(35)
B3EH B3 EntAddr3 411 35 A D X(35)
B3EH B3 EntAddr4 446 30 A D X(30)
B3EH B3 EntPostCd 476 9 A D X(9)
B3EH B3 StatusDate 928 12 DT4 D 9(12)
B3EH B3 RecordLength 0 1315 RL NA NA
B3EH B3 LocusKey 0 0 LK NA NA
B3EH B3 DebugCounters 0 0 DC NA NA
B3EH STATUS_HISTORY B3IID 0 0 N SQL NA
B3EH STATUS_HISTORY Status 925 3 N D 9(3)
B3EH STATUS_HISTORY StatusDate 928 12 DT4 D 9(12)
B3EH STATUS_HISTORY RecordLength 0 121 RL NA NA
B3EH STATUS_HISTORY LocusKey 0 0 LK NA NA
B3EH STATUS_HISTORY DebugCounters 0 0 DC NA NA
B3EH IP_RMD IPRMDIid 0 0 N SQL NA
B3EH IP_RMD AcctSecurNo 37 5 N D X(5)
B3EH IP_RMD TransNo 42 9 N D X(9)
B3EH IP_RMD ToSiteID 940 9 N D 9(9)
B3EH IP_RMD CargCntrlNo 101 25 A D X(25)
B3EH IP_RMD CargCntrlQty 1091 8 N D 9(8)
B3EH IP_RMD CarrierCode 1142 4 A D X(4)
B3EH IP_RMD CustOff 134 3 LZ4/NZ D 9(3)
B3EH IP_RMD VendorName 76 25 A D X(25)
B3EH IP_RMD PortUnlading 1130 4 NZ D 9(4)
B3EH IP_RMD PurchaseOrder1 211 15 A D X(15)
B3EH IP_RMD PurchaseOrder2 226 15 A D X(15)
B3EH IP_RMD RelDate 1070 12 DT4 D 9(12)
B3EH IP_RMD ShipVia 163 18 A D X(18)
B3EH IP_RMD Weight 1112 7 N D 9(7)
B3EH IP_RMD USPortExit 1107 4 A D X(4)
B3EH IP_RMD CreateDate 25 12 DT4 D 9(12)
B3EH IP_RMD LIIBrchNo 4 3 N D 9(3)
B3EH IP_RMD LIIRefNo 7 6 N D 9(6)
B3EH IP_RMD B3Type 23 2 A D X(2)
B3EH IP_RMD ModeTransp 1129 1 NZ D 9(1)
B3EH IP_RMD CtryOrigin 141 3 A D X(3)
B3EH IP_RMD PlaceExp 137 4 A D X(4)
B3EH IP_RMD ShipDate 1022 8 D D 9(8)
B3EH IP_RMD FromSiteId 0 0 N CON 240898001
B3EH IP_RMD IPStatus 0 0 A NA NA
B3EH IP_RMD RecordLength 0 1299 RL NA NA
B3EH IP_RMD LocusKey 0 0 LK NA NA
B3EH IP_RMD DebugCounters 0 0 DC NA NA
CARR CARRIER CarrierCode 5 4 A LK X(4)
CARR CARRIER Description 26 35 A D X(35)
CARR CARRIER RecordLength 0 510 RL NA NA
CARR CARRIER LocusKey 0 0 LK NA NA
CARR CARRIER DebugCounters 0 0 DC NA NA
CBBO BRANCH LIIBrchNo 7 3 K LK 9(3)
CBBO BRANCH Description 26 35 A D X(35)
CBBO BRANCH RecordLength 0 510 RL NA NA
CBBO BRANCH LocusKey 0 0 LK NA NA
CBBO BRANCH DebugCounters 0 0 DC NA NA
CBCN CTRY_CODE CtryCode 5 4 K LK X(4)
CBCN CTRY_CODE Description 26 30 A D X(30)
CBCN CTRY_CODE RecordLength 0 510 RL NA NA
CBCN CTRY_CODE LocusKey 0 0 LK NA NA
CBCN CTRY_CODE DebugCounters 0 0 DC NA NA
CBCO CANCT_OFF CanctOffCode 5 3 LZ4/K LK 9(3)
CBCO CANCT_OFF Description 26 35 A D X(35)
CBCO CANCT_OFF RecordLength 0 510 RL NA NA
CBCO CANCT_OFF LocusKey 0 0 LK NA NA
CBCO CANCT_OFF DebugCounters 0 0 DC NA NA
CBTT STRINGTABLE StrCode 5 2 A K 9(2)
CBTT STRINGTABLE Description 56 30 A D X(30)
CBTT STRINGTABLE StrType 0 3 A CON TRB
CBTT STRINGTABLE RecordLength 0 510 RL NA NA
CBTT STRINGTABLE LocusKey 0 0 LK NA NA
CBTT STRINGTABLE DebugCounters 0 0 DC NA NA
CCID IP_CCI_LINE CCILineIID 0 0 N SQL NA
CCID IP_CCI_LINE CCIIID 0 0 N LOO NA
CCID IP_CCI_LINE CCIPageNo 34 6 N D 9(6)
CCID IP_CCI_LINE CCILineNo 40 5 N D 9(5)
CCID IP_CCI_LINE CtryOrigin 480 3 A D X(3)
CCID IP_CCI_LINE CurrCode 78 3 A D X(3)
CCID IP_CCI_LINE PartDesc 107 58 A D X(58)
CCID IP_CCI_LINE DiscntTypeDesc 392 40 A D X(40)
CCID IP_CCI_LINE HSNo 519 10 A D X(10)
CCID IP_CCI_LINE ItemDiscnt 432 8 CN/P2 D 9(5).9(2)
CCID IP_CCI_LINE PartKeywrd 82 25 A D X(25)
CCID IP_CCI_LINE Quantity 353 14 P2 D 9(11).9(2)
CCID IP_CCI_LINE RevTotVal 441 14 CN/P4 D 9(9).9(4)
CCID IP_CCI_LINE UnitMeas 367 5 A D X(5)
CCID IP_CCI_LINE UnitPrice 339 14 P4 D 9(9).9(4)
CCID IP_CCI_LINE NoPacks 529 2 CN/N D 9(2)
CCID IP_CCI_LINE RecordLength 0 550 RL NA NA
CCID IP_CCI_LINE LocusKey 0 0 LK NA NA
CCID IP_CCI_LINE DebugCounters 0 0 DC NA NA
CLAC LII_ACCOUNT LIIClientNo 4 6 N K 9(6)
CLAC LII_ACCOUNT LIIAccountNo 10 3 N K 9(3)
CLAC LII_ACCOUNT Name 23 35 A D X(35)
CLAC LII_ACCOUNT SiteID 14 9 N D 9(9)
CLAC LII_ACCOUNT PartnerBFlag 13 1 A D X(1)
CLAC LII_ACCOUNT RecordLength 0 122 RL NA NA
CLAC LII_ACCOUNT LocusKey 0 0 LK NA NA
CLAC LII_ACCOUNT DebugCounters 0 0 DC NA NA
CLCO ACCOUNT_CONTACT AcctContIID 0 0 NA SQL NA
CLCO ACCOUNT_CONTACT EmployeeNo 10 5 N D 9(5)
CLCO ACCOUNT_CONTACT LIIClientNo 1 6 N D 9(6)
CLCO ACCOUNT_CONTACT LIIAccountNo 7 3 N D 9(3)
CLCO ACCOUNT_CONTACT RecordLength 0 44 RL NA NA
CLCO ACCOUNT_CONTACT LocusKey 0 0 LK NA NA
CLCO ACCOUNT_CONTACT DebugCounters 0 0 DC NA NA
CLIE LII_CLIENT LIIClientNo 4 6 N K 9(6)
CLIE LII_CLIENT Name 23 35 A D X(35)
CLIE LII_CLIENT SiteID 14 9 N D 9(9)
CLIE LII_CLIENT PartnerBFlag 13 1 A D X(1)
CLIE LII_CLIENT LastPaymntDate 0 0 D D NA
CLIE LII_CLIENT LastChequeAmt 0 0 N D NA
CLIE LII_CLIENT Terms 0 0 N D NA
CLIE LII_CLIENT RecordLength 0 122 RL NA NA
CLIE LII_CLIENT LocusKey 0 0 LK NA NA
CLIE LII_CLIENT DebugCounters 0 0 DC NA NA
EMPL LII_CONTACT EmployeeNo 1 5 N K 9(5)
EMPL LII_CONTACT ContactCode 41 3 A D 9(3)
EMPL LII_CONTACT LastName 6 20 A D X(20)
EMPL LII_CONTACT FirstName 26 15 A D X(15)
EMPL LII_CONTACT Location 44 25 A D X(25)
EMPL LII_CONTACT PhoneNo 69 10 A D 9(10)
EMPL LII_CONTACT PhoneExt 79 4 NZ D 9(4)
EMPL LII_CONTACT FaxNo 83 10 NZ D 9(10)
EMPL LII_CONTACT InActiveFlag 93 1 A D X(1)
EMPL LII_CONTACT RecordLength 0 122 RL NA NA
EMPL LII_CONTACT LocusKey 0 0 LK NA NA
EMPL LII_CONTACT DebugCounters 0 0 DC NA NA
EMTI CONTACT_TYPE ContactType 1 3 N K 9(3)
EMTI CONTACT_TYPE Description 4 25 A D X(25)
EMTI CONTACT_TYPE RecordLength 0 61 RL NA NA
EMTI CONTACT_TYPE LocusKey 0 0 LK NA NA
EMTI CONTACT_TYPE DebugCounters 0 0 DC NA NA
CCIH IP_CCI CCIIID 7 25 N SQL NA
CCIH IP_CCI CCIIID 51 25 N DUP NA
CCIH IP_CCI ToSiteId 1 6 N D 9(6)
CCIH IP_CCI RefNo 2344 9 N D 9(9)
CCIH IP_CCI CommerInvNo 2287 20 A D X(20)
CCIH IP_CCI CondSale 3993 35 A D X(35)
CCIH IP_CCI CostNotIncl 2267 9 CN/P2 D 9(6).9(2)
CCIH IP_CCI DeptRulingDate 2011 15 A D X(15)
CCIH IP_CCI DeptRulingNo 2091 20 A D X(20)
CCIH IP_CCI EntryTransShip 434 4 A D X(4)
CCIH IP_CCI ExpNotIncl 2276 9 CN/P2 D 9(6).9(2)
CCIH IP_CCI InclCost 2240 9 CN/P2 D 9(6).9(2)
CCIH IP_CCI InclExp 2249 9 CN/P2 D 9(6).9(2)
CCIH IP_CCI InclTrans 2231 9 CN/P2 D 9(6).9(2)
CCIH IP_CCI InvTot 3816 12 P2 D 9(9).9(2)
CCIH IP_CCI OtherCCIRef 441 35 A D X(35)
CCIH IP_CCI OtherNotes 2307 75 A D X(75)
CCIH IP_CCI PurchOrderNo 3840 20 A D X(20)
CCIH IP_CCI PurchOrderRef 3860 20 A D X(20)
CCIH IP_CCI PurchSupply 2286 1 A D X(1)
CCIH IP_CCI RoyaltyProceeds 2285 1 A D X(1)
CCIH IP_CCI ShipDate 566 8 A D 9(8)
CCIH IP_CCI TermsPaymnt 2151 35 A D X(35)
CCIH IP_CCI TranspNotIncl 2258 9 CN/P2 D 9(6).9(2)
CCIH IP_CCI UnitMeasNet 2126 5 A D X(5)
CCIH IP_CCI WayBill 546 20 A D X(20)
CCIH IP_CCI WeightGross 2141 10 P2 D 9(7).9(2)
CCIH IP_CCI WeightNet 2131 10 P2 D 9(7).9(2)
CCIH IP_CCI ConsigneeName 574 36 A D X(36)
CCIH IP_CCI ConsigneeAddress1 610 36 A D X(36)
CCIH IP_CCI ConsigneeAddress2 646 36 A D X(36)
CCIH IP_CCI ConsigneeAddress3 682 36 A D X(36)
CCIH IP_CCI ConsigneeCity 718 36 A D X(36)
CCIH IP_CCI ConsigneeProvince 754 36 A D X(36)
CCIH IP_CCI ConsigneeCountry 790 36 A D X(36)
CCIH IP_CCI ConsigneePostCode 826 36 A D X(36)
CCIH IP_CCI CurrCodeDesc 2221 10 A D X(10)
CCIH IP_CCI DirectShipLocation 3959 25 A D X(25)
CCIH IP_CCI ExporterName 1443 36 A D X(36)
CCIH IP_CCI ExporterAddress1 1479 36 A D X(36)
CCIH IP_CCI ExporterAddress2 1515 36 A D X(36)
CCIH IP_CCI ExporterAddress3 1551 36 A D X(36)
CCIH IP_CCI ExporterCity 1587 36 A D X(36)
CCIH IP_CCI ExportProvince 1623 36 A D X(36)
CCIH IP_CCI ExporterCountry 1659 36 A D X(36)
CCIH IP_CCI ExporterPostCode 1695 36 A D X(36)
CCIH IP_CCI OriginatorName 1767 36 A D X(36)
CCIH IP_CCI OriginatorAddress1 1803 36 A D X(36)
CCIH IP_CCI OriginatorAddress2 1839 36 A D X(36)
CCIH IP_CCI OriginatorAddress3 1875 36 A D X(36)
CCIH IP_CCI OriginatorCity 1911 36 A D X(36)
CCIH IP_CCI OriginatorProvince 1947 36 A D X(36)
CCIH IP_CCI OriginatorCountry 1983 36 A D X(36)
CCIH IP_CCI OriginatorPostCode 2019 36 A D X(36)
CCIH IP_CCI PurchaserName 1119 36 A D X(36)
CCIH IP_CCI PurchaserAddress1 1155 36 A D X(36)
CCIH IP_CCI PurchaserAddress2 1191 36 A D X(36)
CCIH IP_CCI PurchaserAddress3 1227 36 A D X(36)
CCIH IP_CCI PurchaserCity 1263 36 A D X(36)
CCIH IP_CCI PurchaserProvince 1299 36 A D X(36)
CCIH IP_CCI PurchaserCountry 1335 36 A D X(36)
CCIH IP_CCI PurchaserPostCode 1371 36 A D X(36)
CCIH IP_CCI VendName 90 36 A D X(36)
CCIH IP_CCI VendorAddress1 126 36 A D X(36)
CCIH IP_CCI VendorAddress2 162 36 A D X(36)
CCIH IP_CCI VendorAddress3 198 36 A D X(36)
CCIH IP_CCI VendorCity 234 36 A D X(36)
CCIH IP_CCI VendorProvince 270 36 A D X(36)
CCIH IP_CCI VendorCountry 306 36 A D X(36)
CCIH IP_CCI VendorPostCode 342 36 A D X(36)
CCIH IP_CCI VendorStateCode 414 3 A D X(3)
CCIH IP_CCI VendorZipCode 417 5 N D 9(5)
CCIH IP_CCI IPStatus 4040 1 A D NA
CCIH IP_CCI FromSiteId 2332 9 N D NA
CCIH IP_CCI CCIExpenseFlag 0 0 A D NA
CCIH IP_CCI CommerInvFlag 0 0 A D NA
CCIH IP_CCI CreateDate 0 0 D D NA
CCIH IP_CCI CreateUserID 0 0 N D NA
CCIH IP_CCI CurrCode 0 0 N D NA
CCIH IP_CCI DirectshipLoc 0 0 N D NA
CCIH IP_CCI DiscntType 0 0 N D NA
CCIH IP_CCI Discount 0 0 CN/F D NA
CCIH IP_CCI InvTotB4Discnt 0 0 F D NA
CCIH IP_CCI ModeDate 0 0 D D NA
CCIH IP_CCI ModeTransp 0 0 N D NA
CCIH IP_CCI ModUserID 0 0 N D NA
CCIH IP_CCI NRIBroker 0 0 CN/N D NA
CCIH IP_CCI NRIDuty 0 0 CN/N D NA
CCIH IP_CCI NRITax 0 0 CN/N D NA
CCIH IP_CCI NRIInclPmntFlg 0 0 A D NA
CCIH IP_CCI TransShipFlag 0 0 A D NA
CCIH IP_CCI UnitMeasGross 0 0 N D NA
CCIH IP_CCI RecordLength 0 4040 RL NA NA
CCIH IP_CCI LocusKey 0 0 LK NA NA
CCIH IP_CCI DebugCounters 0 0 DC NA NA
PB3B IP_B3B IPB3BIID 0 0 N SQL NA
PB3B IP_B3B IPRMDIID 0 0 N PRV NA
PB3B IP_B3B CargCntrlNo 15 25 A D X(25)
PB3B IP_B3B Quantity 40 8 F D 9(8)
PB3B IP_B3B RecordLength 0 61 RL NA NA
PB3B IP_B3B LocusKey 0 0 LK NA NA
PB3B IP_B3B DebugCounters 0 0 DC NA NA
PORE USPORT_EXIT PortExit 43 4 K LK 9(4)
PORE USPORT_EXIT Description 1 40 A D X(40)
PORE USPORT_EXIT RecordLength 0 70 RL NA NA
PORE USPORT_EXIT LocusKey 0 0 LK NA NA
PORE USPORT_EXIT DebugCounters 0 0 DC NA NA
RECD B3_RECAP_DETAIL B3RecapDetIID 0 0 NA SQL NA
RECD B3_RECAP_DETAIL B3LineIID 0 0 NA PRV NA
RECD B3_RECAP_DETAIL CCIPageNo 449 4 N D 9(4)
RECD B3_RECAP_DETAIL CCILineNo 454 3 N D 9(3)
RECD B3_RECAP_DETAIL ProductDesc 464 25 N D X(25)
RECD B3_RECAP_DETAIL UintMeas 642 3 A D X(3)
RECD B3_RECAP_DETAIL UnitMeasQty 645 11 N D 9(7).9(3)
RECD B3_RECAP_DETAIL Amount 607 14 C D 9(11).9(2)
RECD B3_RECAP_DETAIL PercentSplit 1538 6 C D 9(3).9(2)
RECD B3_RECAP_DETAIL DetailPONumber 1610 15 A D X(15)
RECD B3_RECAP_DETAIL UnitPrice 1574 14 C D 9(9).9(4)
RECD B3_RECAP_DETAIL RecordLength 0 70 RL NA NA
RECD B3_RECAP_DETAIL LocusKey 0 0 LK NA NA
RECD B3_RECAP_DETAIL DebugCounters 0 0 DC NA NA
RECM B3_LINE_COMMENT B3LineCommentIID 0 0 NA SQL NA
RECM B3_LINE_COMMENT B3LineIID 0 0 NA PRV NA
RECM B3_LINE_COMMENT Comment1 549 58 A D X(58)
RECM B3_LINE_COMMENT Comment2 491 58 A D X(58)
RECM B3_LINE_COMMENT RecordLength 0 70 RL NA NA
RECM B3_LINE_COMMENT LocusKey 0 0 LK NA NA
RECM B3_LINE_COMMENT DebugCounters 0 0 DC NA NA
RECP B3_SUBHEADER B3SubIID 0 0 NA SQL Retrieve from B3_SUBHDR_IID table and increment
RECP B3_SUBHEADER B3IID 0 0 NA LOO N/A - FK must be retrieved from B3 before inserting sub header
RECP B3_SUBHEADER B3SubNo 12 3 N D 9(3)
RECP B3_SUBHEADER CtryOrigin 93 3 A D X(3)
RECP B3_SUBHEADER CurrCode 110 3 A D X(3)
RECP B3_SUBHEADER PlaceExp 96 4 A D X(4)
RECP B3_SUBHEADER ShipDate 102 8 D D 9(8)
RECP B3_SUBHEADER TariffTrtmnt 100 2 A D 9(2)
RECP B3_SUBHEADER TimeLim 113 2 N D 9(2)
RECP B3_SUBHEADER TimeLimUnit 115 1 A D X(1)
RECP B3_SUBHEADER VendorName 60 25 A D X(25)
RECP B3_SUBHEADER VendorState 85 3 A D X(3)
RECP B3_SUBHEADER VendorZip 88 5 NZ D 9(5)
RECP B3_SUBHEADER LIIBrchNo 3 3 N SK 9(3)
RECP B3_SUBHEADER LIIRefNo 6 6 N SK 9(6)
RECP B3_SUBHEADER RecordLength 0 1700 RL NA NA
RECP B3_SUBHEADER LocusKey 0 0 LK NA NA
RECP B3_SUBHEADER DebugCounters 0 0 DC NA NA
RECQ B3_LINE B3LineIID 0 0 NA SQL NA
RECQ B3_LINE B3SubIID 0 0 NA PRV NA
RECQ B3_LINE B3LineNo 1699 4 N D 9(3)
RECQ B3_LINE AdValDutyRateUMeas 642 3 A D X(3)
RECQ B3_LINE AdValRate1 116 6 N D 9(3).9(2)
RECQ B3_LINE ConvToQty1 645 11 N D 9(7).9(3)
RECQ B3_LINE ConvToQty2 659 11 N D 9(7).9(3)
RECQ B3_LINE ConvToQty3 673 11 N D 9(7).9(3)
RECQ B3_LINE ExcDuty 1514 12 N D 9(9).9(2)
RECQ B3_LINE ExcDutyRateUMeas 670 3 A D X(3)
RECQ B3_LINE ExcDutyRate 150 10 N D 9(3).9(6)
RECQ B3_LINE ExchgRate 621 9 N D 9(2).9(6)
RECQ B3_LINE ExcTax 1478 12 N D 9(9).9(2)
RECQ B3_LINE ExcTaxRateUMeas 279 3 A D X(3)
RECQ B3_LINE ExcTaxRate 263 6 N D 9(3).9(2)
RECQ B3_LINE ExcTaxExmptCode 349 2 A D X(2)
RECQ B3_LINE GST 1466 12 N D 9(9).9(2)
RECQ B3_LINE GSTRate 459 5 N D 9(2).9(2)
RECQ B3_LINE HSNo 35 10 A D 9(10)
RECQ B3_LINE OICSpecialAut 229 16 A D X(16)
RECQ B3_LINE PartKeywrd 464 25 A D X(25)
RECQ B3_LINE PartSufx 489 2 N D 9(2)
RECQ B3_LINE PartDesc 549 58 A D X(58)
RECQ B3_LINE SIMACode 630 2 NZ D 9(2)
RECQ B3_LINE SIMAVal 632 10 N D 9(7).9(2)
RECQ B3_LINE SpcDutyRateUMeas 656 3 A D X(3)
RECQ B3_LINE SpcRate 163 10 N D 9(3).9(6)
RECQ B3_LINE TariffCode 45 4 NZ D 9(4)
RECQ B3_LINE VFCC 1321 12 N D 9(9).9(2)
RECQ B3_LINE VFD 1430 12 N D 9(9).9(2)
RECQ B3_LINE VFDCode 291 2 A D 9(2)
RECQ B3_LINE VFT 1454 12 N D 9(9).9(2)
RECQ B3_LINE LineComment 491 58 A D X(58)
RECQ B3_LINE AdValDuty 1357 12 N D 9(9).9(2)
RECQ B3_LINE SpcDuty 1369 12 N D 9(9).9(2)
RECQ B3_LINE TotalDuty 1442 12 N D 9(9).9(2)
RECQ B3_LINE GSTExemptCode 345 2 NZ D 9(2)
RECQ B3_LINE RulingNumber 176 45 A D X(45)
RECQ B3_LINE TRQNo 351 9 N D 9(9)
RECQ B3_LINE PrevTransNo 791 14 A D X(14)
RECQ B3_LINE PrevLineNo 805 4 N D 9(4)
RECQ B3_LINE RecordLength 0 1750 RL NA NA
RECQ B3_LINE LocusKey 0 0 LK NA NA
RECQ B3_LINE DebugCounters 0 0 DC NA NA
TADU HS_DUTY_RATE HSNo 1 10 A AK 9(8)
TADU HS_DUTY_RATE HStariffTrtmnt 11 2 A AK X(2)
TADU HS_DUTY_RATE EffDate 13 8 D AK 9(8)
TADU HS_DUTY_RATE ExpryDate 21 8 D D 9(8)
TADU HS_DUTY_RATE AdValRate 29 6 F2 D 9(3).9(2)
TADU HS_DUTY_RATE MinAmtType 35 1 A D X(1)
TADU HS_DUTY_RATE MaxAmtType 36 1 A D X(1)
TADU HS_DUTY_RATE MinAmt 37 10 F6 D 9(3).9(6)
TADU HS_DUTY_RATE MaxAmt 47 10 F6 D 9(3).9(6)
TADU HS_DUTY_RATE MinAmtUnitMeas 57 3 A D X(3)
TADU HS_DUTY_RATE MaxAmtUnitMeas 60 3 A D X(3)
TADU HS_DUTY_RATE ExcRate 63 10 F6 D 9(3).9(6)
TADU HS_DUTY_RATE ExcUnitMeas 73 3 A D X(3)
TADU HS_DUTY_RATE SpecRate 76 10 F6 D 9(3).9(6)
TADU HS_DUTY_RATE SpecUnitMeas 86 3 A D X(3)
TADU HS_DUTY_RATE RecordLength 0 121 RL NA NA
TADU HS_DUTY_RATE LocusKey 0 0 LK NA NA
TADU HS_DUTY_RATE DebugCounters 0 0 DC NA NA
TANX TARIFF_CODE TariffCode 1 4 A AK 9(4)
TANX TARIFF_CODE HSTariffTrtmnt 5 2 A AK 9(2)
TANX TARIFF_CODE EffDate 7 8 D AK 9(8)
TANX TARIFF_CODE AdValRate 15 6 F D 9(3).9(2)
TANX TARIFF_CODE MinAmtType 21 1 A D X(1)
TANX TARIFF_CODE MaxAmttype 22 1 A D X(1)
TANX TARIFF_CODE MinAmt 23 10 F D 9(3).9(6)
TANX TARIFF_CODE MaxAmt 33 10 F D 9(3).9(6)
TANX TARIFF_CODE MinAmtUnitMeas 43 3 A D X(3)
TANX TARIFF_CODE MaxAmtUnitMeas 46 3 A D X(3)
TANX TARIFF_CODE SpecRate 49 10 F D 9(3).9(6)
TANX TARIFF_CODE SpecUnitMeas 59 3 A D X(3)
TANX TARIFF_CODE ExpryDate 73 8 D D 9(8)
TANX TARIFF_CODE CreateDate 0 0 D D 9(8)
TANX TARIFF_CODE RecordLength 0 128 RL NA NA
TANX TARIFF_CODE LocusKey 0 0 LK NA NA
TANX TARIFF_CODE DebugCounters 0 0 DC NA NA
TARF TARIFF LIIClientNo 5 6 N K 9(6)
TARF TARIFF VendorName 11 25 A K X(25)
TARF TARIFF ProductKeyword 36 25 A K X(25)
TARF TARIFF ProductSufx 61 2 N K 9(2)
TARF TARIFF ApprovalCode 253 1 A D X(1)
TARF TARIFF B3Description 254 58 A D X(58)
TARF TARIFF B3RefBrch 350 3 N D 9(3)
TARF TARIFF B3RefNo 353 6 N D 9(6)
TARF TARIFF CreateDate 479 8 D D 9(8)
TARF TARIFF COOIndicator 433 1 A D X(1)
TARF TARIFF COOExpryDate 434 8 D D 9(8)
TARF TARIFF ExcTaxLicInd 490 1 A D X(1)
TARF TARIFF GSTExemptCode 487 2 NZ D 9(2)
TARF TARIFF GSTRateCode 489 2 A D 9(2)
TARF TARIFF HSNo 359 10 A D 9(10)
TARF TARIFF LastUsedDate 466 8 D D 9(8)
TARF TARIFF ModDate 402 8 D D 9(8)
TARF TARIFF ModUser 507 12 A D X(12)
TARF TARIFF OIC 130 16 A D X(16)
TARF TARIFF OICExpryDate 450 8 D D 9(8)
TARF TARIFF PercentSplit 373 6 N D 9(3).9(2)
TARF TARIFF PlaceExp 63 4 A D X(4)
TARF TARIFF RemissNo 410 7 NZ D 9(7)
TARF TARIFF RemissExpryDate 417 8 D D 9(8)
TARF TARIFF RulingNo 69 45 A D X(45)
TARF TARIFF RulingExpryDate 114 8 D D 9(8)
TARF TARIFF SpecialInstruct 312 30 A D X(30)
TARF TARIFF Remarks 146 58 A D X(58)
TARF TARIFF TariffCode 369 4 NZ D 9(4)
TARF TARIFF TariffTrtmnt 67 2 A D 9(2)
TARF TARIFF VFDCode 204 2 A D 9(2)
TARF TARIFF ExcTaxRate 237 5 N D 9(2).9(2)
TARF TARIFF ExcTaxAmt 224 10 N D 9(3).9(6)
TARF TARIFF ExcTaxUnit 234 3 A D X(3)
TARF TARIFF ExcTaxDeduct 242 6 N D 9(3).9(2)
TARF TARIFF ExcTaxDeductUnit 248 3 A D X(3)
TARF TARIFF ExcTaxExmptCode 491 2 A D X(2)
TARF TARIFF ProjectCode 379 5 A D X(5)
TARF TARIFF BusinessUnitCode 384 5 A D X(5)
TARF TARIFF MaterialClassCode 524 3 A D X(3)
TARF TARIFF CountryOrigin 519 4 A D X(4)
TARF TARIFF RequirementID 529 8 A D X(8)
TARF TARIFF Version 537 4 A D X(4)
TARF TARIFF OGDExtension 541 6 A D X(6)
TARF TARIFF EndUse 547 3 A D X(3)
TARF TARIFF Miscellaneous 550 3 A D X(3)
TARF TARIFF RegType01 553 3 A D X(3)
TARF TARIFF RecordLength 0 520 RL NA NA
TARF TARIFF LocusKey 0 0 LK NA NA
TARF TARIFF DebugCounters 0 0 DC NA NA
TAUM HS_UOM HSNo 1 10 A AK X(10)
TAUM HS_UOM EffDate 11 8 D AK 9(8)
TAUM HS_UOM UnitMeas 21 3 A D X(3)
TAUM HS_UOM ExpryDate 24 8 D D 9(8)
TAUM HS_UOM RecordLength 0 121 RL NA NA
TAUM HS_UOM LocusKey 0 0 LK NA NA
TAUM HS_UOM DebugCounters 0 0 DC NA NA
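The Offset and Length columns above give fixed-width character positions within each record type. As an illustration only (assuming the offsets are 1-based character positions), the TAUM/HS_UOM fields can be pulled out of a record with `cut`; the sample record below is invented for demonstration, not production data.

```shell
# Sketch: extract HS_UOM fields per the Offset/Length columns above
# (HSNo at 1/10, EffDate at 11/8, UnitMeas at 21/3, ExpryDate at 24/8).
# Sample record is hypothetical.
rec='850110100020120101  KGM20121231'
hsno=$(printf '%s\n' "$rec" | cut -c1-10)      # HSNo: offset 1, length 10
effdate=$(printf '%s\n' "$rec" | cut -c11-18)  # EffDate: offset 11, length 8
unitmeas=$(printf '%s\n' "$rec" | cut -c21-23) # UnitMeas: offset 21, length 3
expry=$(printf '%s\n' "$rec" | cut -c24-31)    # ExpryDate: offset 24, length 8
echo "$hsno $effdate $unitmeas $expry"
```

The same pattern applies to any record type in the table: `cut -c<offset>-<offset+length-1>`.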
IPDEV Informix configuration:
instance name: systestdb
database name: ip_systest@systestdb
root@ifx01:/># onstat -d
IBM Informix Dynamic Server Version 11.50.UC3W2 -- On-Line -- Up 6 days 00:20:43 -- 2051200 Kbytes
Dbspaces
address number flags fchunk nchunks pgsize flags owner name
90431810 1 0x1 1 1 4096 N informix rootdbs
913ff3e0 2 0x1 2 1 4096 N informix llogdbs
913ff540 3 0x1 3 1 4096 N informix plogdbs
913ff6a0 4 0x1 4 23 4096 N informix datadbs1
913ff800 5 0x1 19 25 4096 N informix datadbs2
913ff960 6 0x1 34 7 4096 N informix indxdbs1
913ffac0 7 0x1 37 5 4096 N informix indxdbs2
913ffc20 8 0x2001 40 1 4096 N T informix tempdbs1
913ffd80 9 0x2001 41 1 4096 N T informix tempdbs2
91400018 10 0x2001 42 1 4096 N T informix tempdbs3
91400178 11 0x1 59 20 4096 N informix datadbs3
914002d8 12 0x1 75 4 4096 N informix indxdbs3
91400438 13 0x2001 78 1 4096 N T informix tempdbs4
13 active, 2047 maximum
Chunks
address chunk/dbs offset size free bpages flags pathname
90431970 1 1 0 55000 12509 PO-- /ix_root/ix_root.1
91400598 2 2 0 250000 69947 PO-- /ix_llog/ix_llog.1
91400768 3 3 0 64000 1447 PO-- /ix_plog/ix_plog.1
91400938 4 4 0 250000 7 PO-- /ix_dat1/ix_dat1.1
91400b08 5 4 0 250000 1220 PO-- /ix_dat1/ix_dat1.2
91400cd8 6 4 0 250000 144 PO-- /ix_dat1/ix_dat1.3
91401018 7 4 0 250000 2312 PO-- /ix_dat1/ix_dat1.4
914011e8 8 4 0 250000 0 PO-- /ix_dat1/ix_dat1.5
914013b8 9 4 0 250000 0 PO-- /ix_dat1/ix_dat1.6
91401588 10 4 0 250000 0 PO-- /ix_dat1/ix_dat1.7
91401758 11 4 0 250000 0 PO-- /ix_dat1/ix_dat1.8
91401928 12 4 0 250000 323 PO-- /ix_dat1/ix_dat1.9
91401af8 13 4 0 250000 1800 PO-- /ix_dat1/ix_dat1.10
91401cc8 14 4 0 250000 72 PO-- /ix_dat1/ix_dat1.11
91402018 15 4 0 250000 688 PO-- /ix_dat1/ix_dat1.12
914021e8 16 4 0 250000 1208 PO-- /ix_dat1/ix_dat1.13
914023b8 17 4 0 250000 0 PO-- /ix_dat1/ix_dat1.14
91402588 18 4 0 250000 0 PO-- /ix_dat1/ix_dat1.15
91402758 19 5 0 250000 1 PO-- /ix_dat2/ix_dat2.1
91402928 20 5 0 250000 733 PO-- /ix_dat2/ix_dat2.2
91402af8 21 5 0 250000 5 PO-- /ix_dat2/ix_dat2.3
91402cc8 22 5 0 250000 5 PO-- /ix_dat2/ix_dat2.4
91403018 23 5 0 250000 5 PO-- /ix_dat2/ix_dat2.5
914031e8 24 5 0 250000 133 PO-- /ix_dat2/ix_dat2.6
914033b8 25 5 0 250000 133 PO-- /ix_dat2/ix_dat2.7
91403588 26 5 0 250000 133 PO-- /ix_dat2/ix_dat2.8
91403758 27 5 0 250000 645 PO-- /ix_dat2/ix_dat2.9
91403928 28 5 0 250000 5 PO-- /ix_dat2/ix_dat2.10
91403af8 29 5 0 250000 37125 PO-- /ix_dat2/ix_dat2.11
91403cc8 30 5 0 250000 51205 PO-- /ix_dat2/ix_dat2.12
91404018 31 5 0 250000 9221 PO-- /ix_dat2/ix_dat2.13
914041e8 32 5 0 250000 59397 PO-- /ix_dat2/ix_dat2.14
914043b8 33 5 0 250000 245765 PO-- /ix_dat2/ix_dat2.15
91404588 34 6 0 250000 1 PO-- /ix_idx1/ix_idx1.1
91404758 35 6 0 250000 6 PO-- /ix_idx1/ix_idx1.2
91404928 36 6 0 250000 3 PO-- /ix_idx1/ix_idx1.3
91404af8 37 7 0 250000 3 PO-- /ix_idx2/ix_idx2.1
91404cc8 38 7 0 250000 1 PO-- /ix_idx2/ix_idx2.2
91405018 39 7 0 250000 5 PO-- /ix_idx2/ix_idx2.3
914051e8 40 8 0 250000 249947 PO-- /ix_temp/ix_temp.1
914053b8 41 9 0 250000 249947 PO-- /ix_temp/ix_temp.2
91405588 42 10 0 250000 249947 PO-- /ix_temp/ix_temp.3
91405758 43 7 0 250000 67613 PO-- /ix_idx2/ix_idx2.4
91405928 44 5 0 250000 243717 PO-- /ix_dat2/ix_dat2.16
91405af8 45 6 0 250000 13 PO-- /ix_idx1/ix_idx1.4
91405cc8 46 5 0 250000 249949 PO-- /ix_dat2/ix_dat2.17
91406018 47 5 0 250000 229517 PO-- /ix_dat2/ix_dat2.18
914061e8 48 6 0 250000 217229 PO-- /ix_idx1/ix_idx1.5
914063b8 49 7 0 250000 249997 PO-- /ix_idx2/ix_idx2.5
91406588 50 5 0 250000 233613 PO-- /ix_dat2/ix_dat2.19
91406758 51 6 0 250000 249997 PO-- /ix_idx1/ix_idx1.6
91406928 52 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.20
91406af8 53 5 0 250000 247437 PO-- /ix_dat2/ix_dat2.21
91406cc8 54 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.22
91407018 55 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.23
914071e8 56 6 0 250000 249997 PO-- /ix_idx1/ix_idx1.7
914073b8 57 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.24
91407588 58 5 0 250000 217229 PO-- /ix_dat2/ix_dat2.25
91407758 59 11 0 250000 3 PO-- /ix_dat3/ix_dat3.1
91407928 60 11 0 250000 1 PO-- /ix_dat3/ix_dat3.2
91407af8 61 11 0 250000 1 PO-- /ix_dat3/ix_dat3.3
91407cc8 62 11 0 250000 1 PO-- /ix_dat3/ix_dat3.4
9140b018 63 11 0 250000 1 PO-- /ix_dat3/ix_dat3.5
9140b1e8 64 11 0 250000 1 PO-- /ix_dat3/ix_dat3.6
9140b3b8 65 11 0 250000 1 PO-- /ix_dat3/ix_dat3.7
9140b588 66 11 0 250000 1 PO-- /ix_dat3/ix_dat3.8
9140b758 67 11 0 250000 1 PO-- /ix_dat3/ix_dat3.9
9140b928 68 11 0 250000 1 PO-- /ix_dat3/ix_dat3.10
9140baf8 69 11 0 250000 1 PO-- /ix_dat3/ix_dat3.11
9140bcc8 70 11 0 250000 5 PO-- /ix_dat3/ix_dat3.12
9140c018 71 11 0 250000 5 PO-- /ix_dat3/ix_dat3.13
9140c1e8 72 11 0 250000 5 PO-- /ix_dat3/ix_dat3.14
9140c3b8 73 11 0 250000 5 PO-- /ix_dat3/ix_dat3.15
9140c588 74 11 0 250000 5 PO-- /ix_dat3/ix_dat3.16
9140c758 75 12 0 250000 6 PO-- /ix_idx3/ix_idx3.1
9140c928 76 12 0 250000 5 PO-- /ix_idx3/ix_idx3.2
9140caf8 77 12 0 250000 153677 PO-- /ix_idx3/ix_idx3.3
9140ccc8 78 13 0 250000 249947 PO-- /ix_temp/ix_temp.4
9140d018 79 11 0 250000 5 PO-- /ix_dat3/ix_dat3.17
9140d1e8 80 11 0 250000 35813 PO-- /ix_dat3/ix_dat3.18
9140d3b8 81 4 0 250000 0 PO-- /ix_dat1/ix_dat1.16
9140d588 82 4 0 250000 3784 PO-- /ix_dat1/ix_dat1.17
9140d758 83 4 0 250000 0 PO-- /ix_dat1/ix_dat1.18
9140d928 84 11 0 250000 249997 PO-- /ix_dat3/ix_dat3.19
9140daf8 85 12 0 250000 249997 PO-- /ix_idx3/ix_idx3.4
9140dcc8 86 4 0 250000 0 PO-- /ix_dat1/ix_dat1.19
9140e018 87 4 0 250000 0 PO-- /ix_dat1/ix_dat1.20
9140e1e8 88 4 0 250000 0 PO-- /ix_dat1/ix_dat1.21
9140e3b8 89 4 0 250000 231565 PO-- /ix_dat1/ix_dat1.22
9140e588 90 11 0 250000 249997 PO-- /ix_dat3/ix_dat3.20
9140e758 91 4 0 250000 249997 PO-- /ix_dat1/ix_dat1.23
91 active, 2047 maximum
NOTE: The values in the "size" and "free" columns for DBspace chunks are
displayed in terms of "pgsize" of the DBspace to which they belong.
Expanded chunk capacity mode: disabled
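The `free` column can be turned into a quick capacity check. A sketch using `awk`, with two sample lines copied from the listing above standing in for live output; against a running server you would pipe `onstat -d` directly into the same `awk` program.

```shell
# Sketch: report percent-free per chunk from "onstat -d" style lines.
printf '%s\n' \
  '90431970 1 1 0 55000 12509 PO-- /ix_root/ix_root.1' \
  '914011e8 8 4 0 250000 0 PO-- /ix_dat1/ix_dat1.5' |
awk '$5 ~ /^[0-9]+$/ {        # skip header lines: size column must be numeric
       pct = 100 * $6 / $5    # $5 = size, $6 = free (in dbspace pages)
       printf "%-24s %6.1f%% free\n", $8, pct }'
```

Chunks reporting at or near 0% free (several of the ix_dat1 chunks above) are candidates for adding a chunk with `onspaces -a`.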
//root@ipdev:/ # onstat -d
IBM Informix Dynamic Server Version 11.50.UC3W2 -- On-Line -- Up 6 days 14:58:21 -- 2051200 Kbytes
Dbspaces
address number flags fchunk nchunks pgsize flags owner name
a0431810 1 0x1 1 1 4096 N informix rootdbs
a13ff3e0 2 0x1 2 1 4096 N informix llogdbs
a13ff540 3 0x1 3 1 4096 N informix plogdbs
a13ff6a0 4 0x1 4 22 4096 N informix datadbs1
a13ff800 5 0x1 19 25 4096 N informix datadbs2
a13ff960 6 0x1 34 7 4096 N informix indxdbs1
a13ffac0 7 0x1 37 5 4096 N informix indxdbs2
a13ffc20 8 0x2001 40 1 4096 N T informix tempdbs1
a13ffd80 9 0x2001 41 1 4096 N T informix tempdbs2
a1400018 10 0x2001 42 1 4096 N T informix tempdbs3
a1400178 11 0x1 59 19 4096 N informix datadbs3
a14002d8 12 0x1 75 4 4096 N informix indxdbs3
a1400438 13 0x2001 78 1 4096 N T informix tempdbs4
13 active, 2047 maximum
Chunks
address chunk/dbs offset size free bpages flags pathname
a0431970 1 1 0 55000 22677 PO-- /ix_root/ix_root.1
a1400598 2 2 0 250000 69947 PO-- /ix_llog/ix_llog.1
a1400768 3 3 0 64000 1447 PO-- /ix_plog/ix_plog.1
a1400938 4 4 0 250000 7 PO-- /ix_dat1/ix_dat1.1
a1400b08 5 4 0 250000 1252 PO-- /ix_dat1/ix_dat1.2
a1400cd8 6 4 0 250000 16 PO-- /ix_dat1/ix_dat1.3
a1401018 7 4 0 250000 392 PO-- /ix_dat1/ix_dat1.4
a14011e8 8 4 0 250000 0 PO-- /ix_dat1/ix_dat1.5
a14013b8 9 4 0 250000 0 PO-- /ix_dat1/ix_dat1.6
a1401588 10 4 0 250000 0 PO-- /ix_dat1/ix_dat1.7
a1401758 11 4 0 250000 0 PO-- /ix_dat1/ix_dat1.8
a1401928 12 4 0 250000 323 PO-- /ix_dat1/ix_dat1.9
a1401af8 13 4 0 250000 328 PO-- /ix_dat1/ix_dat1.10
a1401cc8 14 4 0 250000 72 PO-- /ix_dat1/ix_dat1.11
a1402018 15 4 0 250000 48 PO-- /ix_dat1/ix_dat1.12
a14021e8 16 4 0 250000 312 PO-- /ix_dat1/ix_dat1.13
a14023b8 17 4 0 250000 0 PO-- /ix_dat1/ix_dat1.14
a1402588 18 4 0 250000 0 PO-- /ix_dat1/ix_dat1.15
a1402758 19 5 0 250000 1 PO-- /ix_dat2/ix_dat2.1
a1402928 20 5 0 250000 1261 PO-- /ix_dat2/ix_dat2.2
a1402af8 21 5 0 250000 5 PO-- /ix_dat2/ix_dat2.3
a1402cc8 22 5 0 250000 5 PO-- /ix_dat2/ix_dat2.4
a1403018 23 5 0 250000 5 PO-- /ix_dat2/ix_dat2.5
a14031e8 24 5 0 250000 133 PO-- /ix_dat2/ix_dat2.6
a14033b8 25 5 0 250000 133 PO-- /ix_dat2/ix_dat2.7
a1403588 26 5 0 250000 133 PO-- /ix_dat2/ix_dat2.8
a1403758 27 5 0 250000 645 PO-- /ix_dat2/ix_dat2.9
a1403928 28 5 0 250000 30725 PO-- /ix_dat2/ix_dat2.10
a1403af8 29 5 0 250000 154885 PO-- /ix_dat2/ix_dat2.11
a1403cc8 30 5 0 250000 67589 PO-- /ix_dat2/ix_dat2.12
a1404018 31 5 0 250000 9221 PO-- /ix_dat2/ix_dat2.13
a14041e8 32 5 0 250000 92165 PO-- /ix_dat2/ix_dat2.14
a14043b8 33 5 0 250000 245765 PO-- /ix_dat2/ix_dat2.15
a1404588 34 6 0 250000 6 PO-- /ix_idx1/ix_idx1.1
a1404758 35 6 0 250000 4327 PO-- /ix_idx1/ix_idx1.2
a1404928 36 6 0 250000 71761 PO-- /ix_idx1/ix_idx1.3
a1404af8 37 7 0 250000 3 PO-- /ix_idx2/ix_idx2.1
a1404cc8 38 7 0 250000 173 PO-- /ix_idx2/ix_idx2.2
a1405018 39 7 0 250000 96369 PO-- /ix_idx2/ix_idx2.3
a14051e8 40 8 0 250000 249947 PO-- /ix_temp/ix_temp.1
a14053b8 41 9 0 250000 249947 PO-- /ix_temp/ix_temp.2
a1405588 42 10 0 250000 249947 PO-- /ix_temp/ix_temp.3
a1405758 43 7 0 250000 184845 PO-- /ix_idx2/ix_idx2.4
a1405928 44 5 0 250000 243717 PO-- /ix_dat2/ix_dat2.16
a1405af8 45 6 0 250000 106117 PO-- /ix_idx1/ix_idx1.4
a1405cc8 46 5 0 250000 249949 PO-- /ix_dat2/ix_dat2.17
a1406018 47 5 0 250000 229517 PO-- /ix_dat2/ix_dat2.18
a14061e8 48 6 0 250000 249997 PO-- /ix_idx1/ix_idx1.5
a14063b8 49 7 0 250000 249997 PO-- /ix_idx2/ix_idx2.5
a1406588 50 5 0 250000 233613 PO-- /ix_dat2/ix_dat2.19
a1406758 51 6 0 250000 249997 PO-- /ix_idx1/ix_idx1.6
a1406928 52 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.20
a1406af8 53 5 0 250000 247437 PO-- /ix_dat2/ix_dat2.21
a1406cc8 54 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.22
a1407018 55 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.23
a14071e8 56 6 0 250000 249997 PO-- /ix_idx1/ix_idx1.7
a14073b8 57 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.24
a1407588 58 5 0 250000 217229 PO-- /ix_dat2/ix_dat2.25
a1407758 59 11 0 250000 3 PO-- /ix_dat3/ix_dat3.1
a1407928 60 11 0 250000 1 PO-- /ix_dat3/ix_dat3.2
a1407af8 61 11 0 250000 1 PO-- /ix_dat3/ix_dat3.3
a1407cc8 62 11 0 250000 1 PO-- /ix_dat3/ix_dat3.4
a140b018 63 11 0 250000 1 PO-- /ix_dat3/ix_dat3.5
a140b1e8 64 11 0 250000 1 PO-- /ix_dat3/ix_dat3.6
a140b3b8 65 11 0 250000 1 PO-- /ix_dat3/ix_dat3.7
a140b588 66 11 0 250000 1 PO-- /ix_dat3/ix_dat3.8
a140b758 67 11 0 250000 1 PO-- /ix_dat3/ix_dat3.9
a140b928 68 11 0 250000 1 PO-- /ix_dat3/ix_dat3.10
a140baf8 69 11 0 250000 1 PO-- /ix_dat3/ix_dat3.11
a140bcc8 70 11 0 250000 5 PO-- /ix_dat3/ix_dat3.12
a140c018 71 11 0 250000 5 PO-- /ix_dat3/ix_dat3.13
a140c1e8 72 11 0 250000 5 PO-- /ix_dat3/ix_dat3.14
a140c3b8 73 11 0 250000 5 PO-- /ix_dat3/ix_dat3.15
a140c588 74 11 0 250000 5 PO-- /ix_dat3/ix_dat3.16
a140c758 75 12 0 250000 6 PO-- /ix_idx3/ix_idx3.1
a140c928 76 12 0 250000 113189 PO-- /ix_idx3/ix_idx3.2
a140caf8 77 12 0 250000 244317 PO-- /ix_idx3/ix_idx3.3
a140ccc8 78 13 0 250000 249947 PO-- /ix_temp/ix_temp.4
a140d018 79 11 0 250000 5 PO-- /ix_dat3/ix_dat3.17
a140d1e8 80 11 0 250000 106773 PO-- /ix_dat3/ix_dat3.18
a140d3b8 81 4 0 250000 0 PO-- /ix_dat1/ix_dat1.16
a140d588 82 4 0 250000 2888 PO-- /ix_dat1/ix_dat1.17
a140d758 83 4 0 250000 0 PO-- /ix_dat1/ix_dat1.18
a140d928 84 11 0 250000 249997 PO-- /ix_dat3/ix_dat3.19
a140daf8 85 12 0 250000 249997 PO-- /ix_idx3/ix_idx3.4
a140dcc8 86 4 0 250000 0 PO-- /ix_dat1/ix_dat1.19
a140e018 87 4 0 250000 0 PO-- /ix_dat1/ix_dat1.20
a140e1e8 88 4 0 250000 0 PO-- /ix_dat1/ix_dat1.21
a140e3b8 89 4 0 250000 249485 PO-- /ix_dat1/ix_dat1.22
89 active, 2047 maximum
NOTE: The values in the "size" and "free" columns for DBspace chunks are
displayed in terms of "pgsize" of the DBspace to which they belong.
Expanded chunk capacity mode: disabled
//root@ipdev:/ # env
_=/usr/bin/env
LANG=en_US
WSM_WS_CMD=/usr/HTTPServer/bin/apachectl restart
LOGIN=root
IMQCONFIGCL=/etc/IMNSearch/dbcshelp
QMEHOST=newip
PATH=/usr/apps/inf/ver115UC3/bin:/usr/vac/bin:/usr/java6/jre/bin:/usr/apps/inf/ver115UC3:/usr/apps/vpom/bin:/usr/apps/vpom/db:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:.::/usr/opt/ifor/ls/os/aix/bin:/opt/LicenseUseManagement/bin
INFBKUP=/login/infown/bkup
EXTENDED_HISTORY=ON
INFINC=/usr/apps/inf/ver115UC3/incl
INFXCPUVPPRIORITY=90
LC__FASTMSG=true
CGI_DIRECTORY=/usr/HTTPServer/cgi-bin
IMQCONFIGSRV=/etc/IMNSearch
INFPLATFORM=IBMAIX
CLASSPATH=/usr/apps/inf/ver115UC3/jdbc/lib:
LOGNAME=root
MAIL=/usr/spool/mail/root
LOCPATH=/usr/lib/nls/loc
PS1=//root@ipdev:${PWD} #
INFVERSION=ver115UC3
TERMCAP=/usr/apps/inf/ver115UC3/etc/termcap
WSM_DOC_DIR="/usr/HTTPServer/htdocs"
INFLOGDIR=/login/infown/log
DOCUMENT_SERVER_MACHINE_NAME=newip.lgi
USER=root
AUTHSTATE=compat
INFXIOVPPRIORITY=90
INFROOT=/usr/apps/inf
DEFAULT_BROWSER=netscape
DISPLAY=exceed2:0.0
SHELL=/usr/bin/ksh
ODMDIR=/etc/objrepos
INFORMIXTERM=termcap
DOCUMENT_SERVER_PORT=80
HOME=/
INFORMIXDIR=/usr/apps/inf/ver115UC3
INFBIN=/usr/apps/inf/ver115UC3/bin
TERM=vt100
MAILMSG=[YOU HAVE NEW MAIL]
ONCONFIG=onconfig_systestdb
INFXNETVPPRIORITY=90
PWD=/
INFLIB=/usr/apps/inf/ver115UC3/lib
DOCUMENT_DIRECTORY=/usr/HTTPServer/htdocs
TZ=EST5EDT
ARC_CONFIG=onarconfig_systestdb
INFXMSCVPPRIORITY=90
WSM_CGI_DIR=/usr/HTTPServer/cgi-bin
SYSROOT=/usr/apps
INFORMIXSERVER=systestdb
A__z=! LOGNAME
NLSPATH=/usr/lib/nls/msg/%L/%N:/usr/lib/nls/msg/%L/%N.cat
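Of the variables in the listing above, only a handful are required before Informix utilities such as `onstat` or `dbaccess` can locate the instance. A minimal-environment sketch, with values taken from the listing:

```shell
# Minimal Informix environment for the systestdb instance, distilled
# from the "env" output above (a sketch, not a complete profile).
export INFORMIXDIR=/usr/apps/inf/ver115UC3
export INFORMIXSERVER=systestdb
export ONCONFIG=onconfig_systestdb
export ARC_CONFIG=onarconfig_systestdb      # used by ON-Bar / archecker
export PATH=$INFORMIXDIR/bin:$PATH
echo "INFORMIXSERVER=$INFORMIXSERVER ONCONFIG=$ONCONFIG"
```

The remaining INF* variables (INFLIB, INFBKUP, the *VPPRIORITY settings) are site conventions layered on top of this core set.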
#**************************************************************************
#
# Licensed Material - Property Of IBM
#
# "Restricted Materials of IBM"
#
# IBM Informix Dynamic Server
# (c) Copyright IBM Corporation 1996, 2004 All rights reserved.
#
# Title: sqlhosts.demo
# Description:
# Default sqlhosts file for running demos.
#
#**************************************************************************
# IANA (www.iana.org) assigned port number/service names for Informix:
# sqlexec 9088/tcp
# sqlexec-ssl 9089/tcp
#demo_on onipcshm on_hostname on_servername
ol_informix1170 onsoctcp db2cm64 ol_informix1170
#demo_se seipcpip se_hostname sqlexec
# Systest database
systestdb onsoctcp ipdev systestdbsvc
# Archive database entries
ardb onsoctcp ipdev ardbsvc
#prf onsoctcp ipdev prfsvc
# Old production database
ipdb onsoctcp ifx01 ipdbsvc
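Each active sqlhosts entry has four columns: dbservername, connection protocol (nettype), hostname, and service name; the service name must also be registered in `/etc/services` on both client and server. A small sketch splitting the systestdb entry (the `/etc/services` check is shown as a comment because it depends on the host):

```shell
# Sketch: the four sqlhosts columns for the systestdb entry above.
line='systestdb onsoctcp ipdev systestdbsvc'
set -- $line   # word-split the entry into positional parameters
echo "dbservername=$1 nettype=$2 host=$3 service=$4"
# On ipdev, confirm the service/port is registered:
#   grep systestdbsvc /etc/services
```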
###################################################################
# Licensed Material - Property Of IBM
#
# "Restricted Materials of IBM"
#
# IBM Informix Dynamic Server
# Copyright IBM Corporation 1996, 2008 All rights reserved.
#
# Title: onconfig.std
# Description: IBM Informix Dynamic Server Configuration Parameters
#
# Important: $INFORMIXDIR now resolves to the environment
# variable INFORMIXDIR. Replace the value of the INFORMIXDIR
# environment variable only if the path you want is not under
# $INFORMIXDIR.
#
# For additional information on the parameters:
# http://publib.boulder.ibm.com/infocenter/idshelp/v115/index.jsp
###################################################################
###################################################################
# Root Dbspace Configuration Parameters
###################################################################
# ROOTNAME - The root dbspace name to contain reserved pages and
# internal tracking tables.
# ROOTPATH - The path for the device containing the root dbspace
# ROOTOFFSET - The offset, in KB, of the root dbspace into the
# device. The offset is required for some raw devices.
# ROOTSIZE - The size of the root dbspace, in KB. The value of
# 200000 allows for a default user space of about
# 100 MB and the default system space requirements.
# MIRROR - Enable (1) or disable (0) mirroring
# MIRRORPATH - The path for the device containing the mirrored
# root dbspace
# MIRROROFFSET - The offset, in KB, into the mirrored device
#
# Warning: Always verify ROOTPATH before performing
# disk initialization (oninit -i or -iy) to
# avoid disk corruption of another instance
###################################################################
ROOTNAME rootdbs
ROOTPATH /ix_root/ix_root.1
ROOTOFFSET 0
ROOTSIZE 220000
MIRROR 0
#MIRRORPATH $INFORMIXDIR/tmp/demo_on.root_mirror
MIRRORPATH
MIRROROFFSET 0
###################################################################
# Physical Log Configuration Parameters
###################################################################
# PHYSFILE - The size, in KB, of the physical log on disk.
# If RTO_SERVER_RESTART is enabled, the
# suggested formula for the size of PHYSFILE
# (up to about 1 GB) is:
# PHYSFILE = Size of BUFFERS * 1.1
# PLOG_OVERFLOW_PATH - The directory for extra physical log files
# if the physical log overflows during recovery
# or long transaction rollback
# PHYSBUFF - The size of the physical log buffer, in KB
###################################################################
PHYSFILE 250000
#PLOG_OVERFLOW_PATH $INFORMIXDIR/tmp
PLOG_OVERFLOW_PATH
PHYSBUFF 128
###################################################################
# Logical Log Configuration Parameters
###################################################################
# LOGFILES - The number of logical log files
# LOGSIZE - The size of each logical log, in KB
# DYNAMIC_LOGS - The type of dynamic log allocation.
# Acceptable values are:
# 2 Automatic. IDS adds a new logical log to the
# root dbspace when necessary.
# 1 Manual. IDS notifies the DBA to add new logical
# logs when necessary.
# 0 Disabled
# LOGBUFF - The size of the logical log buffer, in KB
###################################################################
LOGFILES 72
LOGSIZE 10000
DYNAMIC_LOGS 1
LOGBUFF 64
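These values imply a fixed total of logical-log disk space (LOGFILES x LOGSIZE), which must fit in the llogdbs dbspace shown in the onstat -d listings above (one 250000-page chunk of 4 KB pages, i.e. 1000000 KB). A quick arithmetic check:

```shell
# Total logical-log space implied by the configuration above.
logfiles=72
logsize_kb=10000
total_kb=$((logfiles * logsize_kb))
echo "logical log total: ${total_kb} KB (~$((total_kb / 1024)) MB)"
# 720000 KB fits comfortably inside the 1000000 KB llogdbs chunk.
```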
###################################################################
# Long Transaction Configuration Parameters
###################################################################
# If IDS cannot roll back a long transaction, the server hangs
# until more disk space is available.
#
# LTXHWM - The percentage of the logical logs that can be
# filled before a transaction is determined to be a
# long transaction and is rolled back
# LTXEHWM - The percentage of the logical logs that have been
# filled before the server suspends all other
# transactions so that the long transaction being
# rolled back has exclusive use of the logs
#
# When dynamic logging is on, you can set higher values for
# LTXHWM and LTXEHWM because the server can add new logical logs
# during long transaction rollback. Set lower values to limit the
# number of new logical logs added.
#
# If dynamic logging is off, set LTXHWM and LTXEHWM to
# lower values, such as 50 and 60 or lower, to prevent long
# transaction rollback from hanging the server due to lack of
# logical log space.
#
# When using Enterprise Replication, set LTXEHWM to at least 30%
# higher than LTXHWM to minimize log overruns.
###################################################################
LTXHWM 50
LTXEHWM 60
###################################################################
# Server Message File Configuration Parameters
###################################################################
# MSGPATH - The path of the IDS message log file
# CONSOLE - The path of the IDS console message file
###################################################################
MSGPATH /login/infown/log/systestdb/online.log
MSG_DATE 1
CONSOLE /login/infown/log/systestdb/online.con
###################################################################
# Tblspace Configuration Parameters
###################################################################
# TBLTBLFIRST - The first extent size, in KB, for the tblspace
# tblspace. Must be in multiples of the page size.
# TBLTBLNEXT - The next extent size, in KB, for the tblspace
# tblspace. Must be in multiples of the page size.
# The default setting for both is 0, which allows IDS to manage
# extent sizes automatically.
#
# TBLSPACE_STATS - Enables (1) or disables (0) IDS to maintain
# tblspace statistics
##################################################################
TBLTBLFIRST 0
TBLTBLNEXT 0
TBLSPACE_STATS 1
###################################################################
# Temporary dbspace and sbspace Configuration Parameters
###################################################################
# DBSPACETEMP - The list of dbspaces used to store temporary
# tables and other objects. Specify a colon
# separated list of dbspaces that exist when the
# server is started. If no dbspaces are specified,
# or if all specified dbspaces are not valid,
# temporary files are created in the /tmp directory
# instead.
# SBSPACETEMP - The list of sbspaces used to store temporary
# tables for smart large objects. If no sbspace
# is specified, temporary files are created in
# a standard sbspace.
###################################################################
DBSPACETEMP tempdbs1,tempdbs2,tempdbs3,tempdbs4
SBSPACETEMP
###################################################################
# Dbspace and sbspace Configuration Parameters
###################################################################
# SBSPACENAME - The default sbspace name where smart large objects
# are stored if no sbspace is specified during
# smart large object creation. Some DataBlade
# modules store smart large objects in this
# location.
# SYSSBSPACENAME - The default sbspace for system statistics
# collection. Otherwise, IDS stores statistics
# in the sysdistrib system catalog table.
# ONDBSPACEDOWN - Specifies how IDS behaves when it encounters a
# dbspace that is offline. Acceptable values
# are:
# 0 Continue
# 1 Stop
# 2 Wait for DBA action
###################################################################
SBSPACENAME
SYSSBSPACENAME
ONDBSPACEDOWN 2
###################################################################
# System Configuration Parameters
###################################################################
# SERVERNUM - The unique ID for the IDS instance. Acceptable
# values are 0 through 255, inclusive.
# DBSERVERNAME - The name of the default database server
# DBSERVERALIASES - The list of up to 32 alternative dbservernames,
# separated by commas
###################################################################
SERVERNUM 50
DBSERVERNAME systestdb
DBSERVERALIASES
###################################################################
# Network Configuration Parameters
###################################################################
# NETTYPE - The configuration of poll threads
# for a specific protocol. The
# format is:
# NETTYPE <protocol>,<# poll threads>
# ,<number of connections/thread>
# ,(NET|CPU)
# You can include multiple NETTYPE
# entries for multiple protocols.
# LISTEN_TIMEOUT - The number of seconds that IDS
# waits for a connection
# MAX_INCOMPLETE_CONNECTIONS - The maximum number of incomplete
# connections before IDS logs a Denial
# of Service (DoS) error
# FASTPOLL - Enables (1) or disables (0) fast
# polling of your network, if your
# operating system supports it.
###################################################################
NETTYPE soctcp,2,100,CPU
LISTEN_TIMEOUT 60
MAX_INCOMPLETE_CONNECTIONS 1024
FASTPOLL 1
###################################################################
# CPU-Related Configuration Parameters
###################################################################
# MULTIPROCESSOR - Specifies whether the computer has multiple
# CPUs. Acceptable values are: 0 (single
# processor), 1 (multiple processors or
# multi-core chips)
# VPCLASS cpu - Configures the CPU VPs. The format is:
# VPCLASS cpu,num=<#>[,max=<#>][,aff=<#>]
# [,noage]
# VP_MEMORY_CACHE_KB - Specifies the amount of private memory
# blocks of your CPU VP, in KB, that the
# database server can access.
# Acceptable values are:
# 0 (disable)
# 800 through 40% of the value of SHMTOTAL
# SINGLE_CPU_VP - Optimizes performance if IDS runs with
# only one CPU VP. Acceptable values are:
# 0 multiple CPU VPs
# Any nonzero value (optimize for one CPU VP)
###################################################################
MULTIPROCESSOR 1
#VPCLASS cpu,num=6,noage
VPCLASS cpu,num=8,noage
VP_MEMORY_CACHE_KB 0
SINGLE_CPU_VP 0
###################################################################
# AIO and Cleaner-Related Configuration Parameters
###################################################################
# VPCLASS aio - Configures the AIO VPs. The format is:
# VPCLASS aio,num=<#>[,max=<#>][,aff=<#>][,noage]
# CLEANERS - The number of page cleaner threads
# AUTO_AIOVPS - Enables (1) or disables (0) automatic management
# of AIO VPs
# DIRECT_IO - Enables (1) or disables (0) direct I/O for chunks
###################################################################
#VPCLASS aio,num=6
VPCLASS aio,num=36
CLEANERS 16
AUTO_AIOVPS 1
DIRECT_IO 0
#DIRECT_IO 1
###################################################################
# Lock-Related Configuration Parameters
###################################################################
# LOCKS - The initial number of locks when IDS starts.
# Dynamic locking can add extra locks if needed.
# DEF_TABLE_LOCKMODE - The default table lock mode for new tables.
# Acceptable values are ROW and PAGE (default).
###################################################################
LOCKS 3000000
DEF_TABLE_LOCKMODE ROW
###################################################################
# Shared Memory Configuration Parameters
###################################################################
# RESIDENT - Controls whether shared memory is resident.
# Acceptable values are:
# 0 off (default)
# 1 lock the resident segment only
# n lock the resident segment and the next n-1
# virtual segments, where n < 100
# -1 lock all resident and virtual segments
# SHMBASE - The shared memory base address; do not change
# SHMVIRTSIZE - The initial size, in KB, of the virtual
# segment of shared memory
# SHMADD - The size, in KB, of additional virtual shared
# memory segments
# EXTSHMADD - The size, in KB, of each extension shared
# memory segment
# SHMTOTAL - The maximum amount of shared memory for IDS,
# in KB. A 0 indicates no specific limit.
# SHMVIRT_ALLOCSEG - Controls when IDS adds a memory segment and
# the alarm level if the memory segment cannot
# be added.
# For the first field, acceptable values are:
# - 0 Disabled
# - A decimal number indicating the percentage
# of memory used before a segment is added
# - The number of KB remaining when a segment
# is added
# For the second field, specify an alarm level
# from 1 (non-event) to 5 (fatal error).
# SHMNOACCESS - A list of up to 10 memory address ranges
# that IDS cannot use to attach shared memory.
# Each address range is the start and end memory
# address in hex format, separated by a hyphen.
# Use a comma to separate each range in the list.
###################################################################
RESIDENT 0
#SHMBASE 0x30000000L
SHMBASE 0x40000000L
SHMVIRTSIZE 500000
SHMADD 100000
EXTSHMADD 100000
SHMTOTAL 0
SHMVIRT_ALLOCSEG 0,3
SHMNOACCESS
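With the values above, the virtual portion of shared memory starts at roughly 488 MB (SHMVIRTSIZE 500000 KB) and grows in ~98 MB steps (SHMADD 100000 KB) with no upper bound (SHMTOTAL 0). A back-of-the-envelope check of total virtual size after a few dynamic additions:

```shell
# Values in KB, copied from this onconfig
SHMVIRTSIZE=500000   # initial virtual segment
SHMADD=100000        # size of each additional virtual segment
n=3                  # example: segments added since startup
echo $(( SHMVIRTSIZE + n * SHMADD ))   # total virtual shared memory in KB
```

On the live server, `onstat -g seg` reports the segments actually allocated.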
###################################################################
# Checkpoint and System Block Configuration Parameters
###################################################################
# CKPTINTVL - Specifies how often, in seconds, IDS checks
# if a checkpoint is needed. 0 indicates that
# IDS does not check for checkpoints. Ignored
# if RTO_SERVER_RESTART is set.
# AUTO_CKPTS - Enables (1) or disables (0) monitoring of
# critical resources to trigger checkpoints
# more frequently if there is a chance that
# transaction blocking might occur.
# RTO_SERVER_RESTART - Specifies, in seconds, the Recovery Time
# Objective for IDS restart after a server
# failure. Acceptable values are 0 (off) and
# any number from 60-1800, inclusive.
# BLOCKTIMEOUT - Specifies the amount of time, in seconds,
# for a system block.
###################################################################
CKPTINTVL 600
AUTO_CKPTS 1
RTO_SERVER_RESTART 0
BLOCKTIMEOUT 3600
###################################################################
# Transaction-Related Configuration Parameters
###################################################################
# TXTIMEOUT - The distributed transaction timeout, in seconds
# DEADLOCK_TIMEOUT - The maximum time, in seconds, to wait for a
# lock in a distributed transaction.
# HETERO_COMMIT - Enables (1) or disables (0) heterogeneous
# commits for a distributed transaction
# involving an EGM gateway.
###################################################################
TXTIMEOUT 300
DEADLOCK_TIMEOUT 60
HETERO_COMMIT 0
###################################################################
# ontape Tape Device Configuration Parameters
###################################################################
# TAPEDEV - The tape device path for backups. To use standard
# I/O instead of a device, set to stdio.
# TAPEBLK - The tape block size, in KB, for backups
# TAPESIZE - The maximum amount of data to put on one backup
# tape. Acceptable values are 0 (unlimited) or any
# positive integral multiple of TAPEBLK.
###################################################################
#TAPEDEV /dev/rmt0
TAPEDEV /dev/null
TAPEBLK 1024
TAPESIZE 72000000
###################################################################
# ontape Logical Log Tape Device Configuration Parameters
###################################################################
# LTAPEDEV - The tape device path for logical logs
# LTAPEBLK - The tape block size, in KB, for backing up logical
# logs
# LTAPESIZE - The maximum amount of data to put on one logical
# log tape. Acceptable values are 0 (unlimited) or any
# positive integral multiple of LTAPEBLK.
###################################################################
LTAPEDEV /dev/null
LTAPEBLK 1024
LTAPESIZE 72000000
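Note that both TAPEDEV and LTAPEDEV point at /dev/null, so ontape archives and logical-log backups on this instance are discarded rather than written to tape; any real backup must be taken by other means. A small sanity check along these lines (the default file path is an assumption; adjust for your environment):

```shell
# Warn when ontape output is configured to /dev/null (backups discarded).
f="${INFORMIXDIR:-/usr/informix}/etc/${ONCONFIG:-onconfig}"
if grep -Eq '^(L?TAPEDEV)[[:space:]]+/dev/null' "$f" 2>/dev/null; then
    echo "WARNING: TAPEDEV/LTAPEDEV is /dev/null - ontape backups are discarded"
fi
```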
###################################################################
# Backup and Restore Configuration Parameters
###################################################################
# BAR_ACT_LOG - The ON-Bar activity log file location.
# Do not use the /tmp directory. Use a
# directory with restricted permissions.
# BAR_DEBUG_LOG - The ON-Bar debug log file location.
# Do not use the /tmp directory. Use a
# directory with restricted permissions.
# BAR_DEBUG - The debug level for ON-Bar. Acceptable
# values are 0 (off) through 9 (high).
# BAR_MAX_BACKUP - The number of backup threads used in a
# backup. Acceptable values are 0 (unlimited)
# or any positive integer.
# BAR_RETRY - Specifies the number of times to retry a
# backup or restore operation before reporting
# a failure
# BAR_NB_XPORT_COUNT - Specifies the number of data buffers that
# each onbar_d process uses to communicate
# with the database server
# BAR_XFER_BUF_SIZE - The size, in pages, of each data buffer.
# Acceptable values are 1 through 15 for
# 4 KB pages and 1 through 31 for 2 KB pages.
# RESTARTABLE_RESTORE - Enables ON-Bar to continue a backup after a
# failure. Acceptable values are OFF or ON.
# BAR_PROGRESS_FREQ - Specifies, in minutes, how often progress
# messages are placed in the ON-Bar activity
# log. Acceptable values are: 0 (record only
# completion messages) or 5 and above.
# BAR_BSALIB_PATH - The shared library for ON-Bar and the
# storage manager. The default value is
# $INFORMIXDIR/lib/ibsad001 (with a
# platform-specific file extension).
# BACKUP_FILTER - Specifies the pathname of a filter program
# to transform data during a backup, plus any
# program options
# RESTORE_FILTER - Specifies the pathname of a filter program
# to transform data during a restore, plus any
# program options
# BAR_PERFORMANCE - Specifies the type of performance statistics
# to report to the ON-Bar activity log for backup
# and restore operations.
# Acceptable values are:
# 0 = Turn off performance monitoring (Default)
# 1 = Display the time spent transferring data
# between the IDS instance and the storage
# manager
# 2 = Display timestamps in microseconds
# 3 = Display both timestamps and transfer
# statistics
###################################################################
BAR_ACT_LOG /login/infown/log/systestdb/bar_act.log
BAR_DEBUG_LOG /login/infown/log/systestdb/bar_dbug.log
BAR_DEBUG 0
BAR_MAX_BACKUP 0
BAR_RETRY 1
BAR_NB_XPORT_COUNT 20
BAR_XFER_BUF_SIZE 31
RESTARTABLE_RESTORE ON
BAR_PROGRESS_FREQ 0
BAR_BSALIB_PATH
BACKUP_FILTER
RESTORE_FILTER
BAR_PERFORMANCE 0
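Because BAR_PROGRESS_FREQ is 0, only completion messages land in the activity log, so failures show up as explicit error lines. A quick scan of the log configured above (the path is copied from this onconfig; the error patterns are a rough heuristic, not an official format):

```shell
# Show recent problem lines from the ON-Bar activity log.
LOG=/login/infown/log/systestdb/bar_act.log
grep -iE 'error|fail|warn' "$LOG" 2>/dev/null | tail -n 20
```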
###################################################################
# Informix Storage Manager (ISM) Configuration Parameters
###################################################################
# ISM_DATA_POOL - Specifies the name for the ISM data pool
# ISM_LOG_POOL - Specifies the name for the ISM log pool
###################################################################
ISM_DATA_POOL ISMData
ISM_LOG_POOL ISMLogs
###################################################################
# Data Dictionary Cache Configuration Parameters
###################################################################
# DD_HASHSIZE - The number of data dictionary pools. Set to any
# positive integer; a prime number is recommended.
# DD_HASHMAX - The number of entries per pool.
# Set to any positive integer.
###################################################################
DD_HASHSIZE 31
DD_HASHMAX 10
###################################################################
# Data Distribution Configuration Parameters
###################################################################
# DS_HASHSIZE - The number of data distribution pools.
# Set to any positive integer; a prime number is
# recommended.
# DS_POOLSIZE - The maximum number of entries in the data
# distribution cache. Set to any positive integer.
###################################################################
DS_HASHSIZE 31
DS_POOLSIZE 127
##################################################################
# User Defined Routine (UDR) Cache Configuration Parameters
##################################################################
# PC_HASHSIZE - The number of UDR pools. Set to any
# positive integer; a prime number is recommended.
# PC_POOLSIZE - The maximum number of entries in the
# UDR cache. Set to any positive integer.
###################################################################
PC_HASHSIZE 31
PC_POOLSIZE 127
###################################################################
# SQL Statement Cache Configuration Parameters
###################################################################
# STMT_CACHE - Controls SQL statement caching. Acceptable
# values are:
# 0 Disabled
# 1 Enabled at the session level
# 2 All statements are cached
# STMT_CACHE_HITS - The number of times an SQL statement must be
# executed before becoming fully cached.
# 0 indicates that all statements are
# fully cached the first time.
# STMT_CACHE_SIZE - The size, in KB, of the SQL statement cache
# STMT_CACHE_NOLIMIT - Controls additional memory consumption.
# Acceptable values are:
# 0 Limit memory to STMT_CACHE_SIZE
# 1 Obtain as much memory, temporarily, as needed
# STMT_CACHE_NUMPOOL - The number of pools for the SQL statement
# cache. Acceptable values are 1 through
# 256, inclusive.
###################################################################
STMT_CACHE 0
STMT_CACHE_HITS 0
STMT_CACHE_SIZE 512
STMT_CACHE_NOLIMIT 0
STMT_CACHE_NUMPOOL 1
###################################################################
# Operating System Session-Related Configuration Parameters
###################################################################
# USEOSTIME - The precision of SQL statement timing.
# Acceptable values are 0 (precision to seconds)
# and 1 (precision to subseconds). Subsecond
# precision can degrade performance.
# STACKSIZE - The size, in KB, for a session stack
# ALLOW_NEWLINE - Controls whether embedded new line characters
# in string literals are allowed in SQL
# statements. Acceptable values are 1 (allowed)
# and any number other than 1 (not allowed).
# USELASTCOMMITTED - Controls the committed read isolation level.
# Acceptable values are:
# - NONE Waits on a lock
# - DIRTY READ Uses the last committed value in
# place of a dirty read
# - COMMITTED READ Uses the last committed value
# in place of a committed read
# - ALL Uses the last committed value in place
# of all isolation levels that support the last
# committed option
###################################################################
USEOSTIME 0
STACKSIZE 64
ALLOW_NEWLINE 0
USELASTCOMMITTED NONE
###################################################################
# Index Related Configuration Parameters
###################################################################
# FILLFACTOR - The percentage of index page fullness
# MAX_FILL_DATA_PAGES - Enables (1) or disables (0) filling data
# pages that have variable length rows as
# full as possible
# BTSCANNER - Specifies the configuration settings for all
# btscanner threads. The format is:
# BTSCANNER num=<#>,threshold=<#>,rangesize=<#>,
# alice=(0-12),compression=[low|med|high|default]
# ONLIDX_MAXMEM - The amount of memory, in KB, allocated for
# the pre-image pool and updator log pool for
# each partition.
###################################################################
FILLFACTOR 90
MAX_FILL_DATA_PAGES 0
#BTSCANNER num=1,threshold=5000,rangesize=-1,alice=6,compression=default
BTSCANNER num=2,threshold=500,rangesize=-1,alice=6,compression=default
ONLIDX_MAXMEM 5120
###################################################################
# Parallel Database Query (PDQ) Configuration Parameters
###################################################################
# MAX_PDQPRIORITY - The maximum amount of resources, as a
# percentage, that PDQ can allocate to any
# one decision support query
# DS_MAX_QUERIES - The maximum number of concurrent decision
# support queries
# DS_TOTAL_MEMORY - The maximum amount, in KB, of decision
# support query memory
# DS_MAX_SCANS - The maximum number of concurrent decision
# support scans
# DS_NONPDQ_QUERY_MEM - The amount of non-PDQ query memory, in KB.
# Acceptable values are 128 to 25% of
# DS_TOTAL_MEMORY.
# DATASKIP - Specifies whether to skip dbspaces when
# processing a query. Acceptable values are:
# - ALL Skip all unavailable fragments
# - ON <dbspace1> <dbspace2>... Skip listed
# dbspaces
# - OFF Do not skip dbspaces (default)
###################################################################
MAX_PDQPRIORITY 100
DS_MAX_QUERIES
DS_TOTAL_MEMORY
DS_MAX_SCANS 1048576
DS_NONPDQ_QUERY_MEM 128
DATASKIP
###################################################################
# Optimizer Configuration Parameters
###################################################################
# OPTCOMPIND - Controls how the optimizer determines the best
# query path. Acceptable values are:
# 0 Nested loop joins are preferred
# 1 If isolation level is repeatable read,
# works the same as 0, otherwise works same as 2
# 2 Optimizer decisions are based on cost only
# DIRECTIVES - Specifies whether optimizer directives are
# enabled (1) or disabled (0). Default is 1.
# EXT_DIRECTIVES - Controls the use of external SQL directives.
# Acceptable values are:
# 0 Disabled
# 1 Enabled if the IFX_EXTDIRECTIVES environment
# variable is enabled
# 2 Enabled even if the IFX_EXTDIRECTIVES
# environment is not set
# OPT_GOAL - Controls how the optimizer should optimize for
# fastest retrieval. Acceptable values are:
# -1 All rows in a query
# 0 The first rows in a query
# IFX_FOLDVIEW - Enables (1) or disables (0) folding views that
# have multiple tables or a UNION ALL clause.
# Disabled by default.
# AUTO_REPREPARE - Enables (1) or disables (0) automatically
# re-optimizing stored procedures and re-preparing
# prepared statements when tables that are referenced
# by them change. Minimizes the occurrence of the
# -710 error.
####################################################################
OPTCOMPIND 2
DIRECTIVES 1
EXT_DIRECTIVES 0
OPT_GOAL -1
IFX_FOLDVIEW 0
AUTO_REPREPARE 1
###################################################################
# Read-ahead Configuration Parameters
###################################################################
#RA_PAGES - The number of pages, as a positive integer, to
# attempt to read ahead
#RA_THRESHOLD - The number of pages, as a positive integer, left
# before the next read-ahead group
###################################################################
RA_PAGES 64
RA_THRESHOLD 16
###################################################################
# SQL Tracing and EXPLAIN Plan Configuration Parameters
###################################################################
# EXPLAIN_STAT - Enables (1) or disables (0) including the Query
# Statistics section in the EXPLAIN output file
# SQLTRACE - Configures SQL tracing. The format is:
# SQLTRACE level=(low|med|high),ntraces=<#>,size=<#>,
# mode=(global|user)
###################################################################
EXPLAIN_STAT 0
#SQLTRACE level=low,ntraces=1000,size=2,mode=global
###################################################################
# Security Configuration Parameters
###################################################################
# DBCREATE_PERMISSION - Specifies the users who can create
# databases (by default, any user can).
# Add a DBCREATE_PERMISSION entry
# for each user who needs database
# creation privileges. Ensure user
# informix is authorized when you
# first initialize IDS.
# DB_LIBRARY_PATH - Specifies the locations, separated
# by commas, from which IDS can use
# UDR or UDT shared libraries. If set,
# make sure that all directories containing
# the blade modules are listed, to
# ensure all DataBlade modules will
# work.
# IFX_EXTEND_ROLE - Controls whether administrators
# can use the EXTEND role to specify
# which users can register external
# routines. Acceptable values are:
# 0 Any user can register external
# routines
# 1 Only users granted the ability
# to register external routines
# can do so (Default)
# SECURITY_LOCALCONNECTION - Specifies whether IDS performs
# security checking for local
# connections. Acceptable values are:
# 0 Off
# 1 Validate ID
# 2 Validate ID and port
# UNSECURE_ONSTAT - Controls whether non-DBSA users are
# allowed to run all onstat commands.
# Acceptable values are:
# 1 Enabled
# 0 Disabled (Default)
# ADMIN_USER_MODE_WITH_DBSA - Controls who can connect to IDS
# in administration mode. Acceptable
# values are:
# 1 DBSAs, users specified by
# ADMIN_MODE_USERS, and the user
# informix
# 0 Only the user informix (Default)
# ADMIN_MODE_USERS - Specifies the user names, separated by
# commas, who can connect to IDS in
# administration mode, in addition to
# the user informix
# SSL_KEYSTORE_LABEL - The label, up to 512 characters, of
# the IDS certificate used in Secure
# Sockets Layer (SSL) protocol
# communications.
###################################################################
DBCREATE_PERMISSION informix
#DB_LIBRARY_PATH
IFX_EXTEND_ROLE 1
SECURITY_LOCALCONNECTION
UNSECURE_ONSTAT 1
ADMIN_USER_MODE_WITH_DBSA
ADMIN_MODE_USERS
SSL_KEYSTORE_LABEL
###################################################################
# LBAC Configuration Parameters
###################################################################
# PLCY_POOLSIZE - The maximum number of entries in each hash
# bucket of the LBAC security information cache
# PLCY_HASHSIZE - The number of hash buckets in the LBAC security
# information cache
# USRC_POOLSIZE - The maximum number of entries in each hash
# bucket of the LBAC credential memory cache
# USRC_HASHSIZE - The number of hash buckets in the LBAC credential
# memory cache
###################################################################
PLCY_POOLSIZE 127
PLCY_HASHSIZE 31
USRC_POOLSIZE 127
USRC_HASHSIZE 31
###################################################################
# Optical Configuration Parameters
###################################################################
# STAGEBLOB - The name of the optical blobspace. Must be set to
# use the optical-storage subsystem.
# OPCACHEMAX - Maximum optical cache size, in KB
###################################################################
STAGEBLOB
OPCACHEMAX 0
###################################################################
# High Availability and Enterprise Replication Security
# Configuration Parameters
###################################################################
# ENCRYPT_HDR - Enables (1) or disables (0) encryption for HDR.
# ENCRYPT_SMX - Controls the level of encryption for RSS and
# SDS servers. Acceptable values are:
# 0 Do not encrypt (Default)
# 1 Encrypt if possible
# 2 Always encrypt
# ENCRYPT_CDR - Controls the level of encryption for ER.
# Acceptable values are:
# 0 Do not encrypt (Default)
# 1 Encrypt if possible
# 2 Always encrypt
# ENCRYPT_CIPHERS - A list of encryption ciphers and modes,
# separated by commas. Default is all.
# ENCRYPT_MAC - Controls the level of message authentication
# code (MAC). Acceptable values are off, high,
# medium, and low. List multiple values separated
# by commas; the highest common level between
# servers is used.
# ENCRYPT_MACFILE - The paths of the MAC key files, separated
# by commas. Use the builtin keyword to specify
# the built-in key. Default is builtin.
# ENCRYPT_SWITCH - Defines the frequencies, in minutes, at which
# ciphers and keys are renegotiated. Format is:
# <cipher_switch_time>,<key_switch_time>
# Default is 60,60.
###################################################################
ENCRYPT_HDR
ENCRYPT_SMX
ENCRYPT_CDR 0
ENCRYPT_CIPHERS
ENCRYPT_MAC
ENCRYPT_MACFILE
ENCRYPT_SWITCH
###################################################################
# Enterprise Replication (ER) Configuration Parameters
###################################################################
# CDR_EVALTHREADS - The number of evaluator threads per
# CPU VP and the number of additional
# threads, separated by a comma.
# Acceptable values are: a non-zero value
# followed by a non-negative value
# CDR_DSLOCKWAIT - The number of seconds the Datasync
# waits for database locks.
# CDR_QUEUEMEM - The maximum amount of memory, in KB,
# for the send and receive queues.
# CDR_NIFCOMPRESS - Controls the network interface
# compression level.
# Acceptable values are:
# -1 Never
# 0 None
# 1-9 Compression level
# CDR_SERIAL - Specifies the incremental size and
# the starting value of replicated
# serial columns. The format is:
# <delta>,<offset>
# CDR_DBSPACE - The dbspace name for the syscdr
# database.
# CDR_QHDR_DBSPACE - The name of the transaction record
# dbspace. Default is the root dbspace.
# CDR_QDATA_SBSPACE - The names of sbspaces for spooled
# transaction data, separated by commas.
# CDR_MAX_DYNAMIC_LOGS - The maximum number of dynamic log
# requests that ER can make within one
# server session. Acceptable values are:
# -1 (unlimited), 0 (disabled),
# 1 through n (limit to n requests)
# CDR_SUPPRESS_ATSRISWARN - The Datasync error and warning code
# numbers to be suppressed in ATS and RIS
# files. Acceptable values are: numbers
# or ranges of numbers separated by commas.
# Separate numbers in a range by a hyphen.
###################################################################
CDR_EVALTHREADS 1,2
CDR_DSLOCKWAIT 5
CDR_QUEUEMEM 4096
CDR_NIFCOMPRESS 0
CDR_SERIAL 0
CDR_DBSPACE
CDR_QHDR_DBSPACE
CDR_QDATA_SBSPACE
CDR_MAX_DYNAMIC_LOGS 0
CDR_SUPPRESS_ATSRISWARN
###################################################################
# High Availability Cluster (HDR, SDS, and RSS)
# Configuration Parameters
###################################################################
# DRAUTO - Controls automatic failover of primary
# servers. Valid for HDR, SDS, and RSS.
# Acceptable values are:
# 0 Manual
# 1 Retain server type
# 2 Reverse server type
# 3 Connection Manager Arbitrator controls
# server type
# DRINTERVAL - The maximum interval, in seconds, between HDR
# buffer flushes. Valid for HDR only.
# DRTIMEOUT - The time, in seconds, before a network
# timeout occurs. Valid for HDR only.
# DRLOSTFOUND - The path of the HDR lost-and-found file.
# Valid for HDR only.
# DRIDXAUTO - Enables (1) or disables (0) automatic index
# repair for an HDR pair. Default is 0.
# HA_ALIAS - The server alias for a high-availability
# cluster. Must be the same as a value of
# DBSERVERNAME or DBSERVERALIASES that uses a
# network-based connection type. Valid for HDR,
# SDS, and RSS.
# LOG_INDEX_BUILDS - Enables (1) or disables (0) index page logging.
# Required for RSS. Optional for HDR and SDS.
# SDS_ENABLE - Enables (1) or disables (0) an SDS server.
# Set this value on an SDS server after setting
# up the primary. Valid for SDS only.
# SDS_TIMEOUT - The time, in seconds, that the primary waits
# for an acknowledgement from an SDS server
# while performing page flushing before marking
# the SDS server as down. Valid for SDS only.
# SDS_TEMPDBS - The temporary dbspace used by an SDS server.
# The format is:
# <dbspace_name>,<path>,<pagesize in KB>,<offset in KB>,
# <size in KB>
# You can include up to 16 entries of SDS_TEMPDBS to
# specify additional dbspaces. Valid for SDS.
# SDS_PAGING - The paths of two buffer paging files,
# separated by a comma. Valid for SDS only.
# UPDATABLE_SECONDARY - Controls whether secondary servers can accept
# update, insert, and delete operations from clients.
# If enabled, specifies the number of connection
# threads between the secondary and primary servers
# for transmitting updates from the secondary.
# Acceptable values are:
# 0 Secondary server is read-only (default)
# 1 through twice the number of CPU VPs: the number
# of threads used to apply updates from the secondary.
# Valid for HDR, SDS, and RSS.
# FAILOVER_CALLBACK - Specifies the path and program name called when a
# secondary server transitions to a standard or
# primary server. Valid for HDR, SDS, and RSS.
# TEMPTAB_NOLOG - Controls the default logging mode for temporary
# tables that are explicitly created with the
# CREATE TEMP TABLE or SELECT INTO TEMP statements.
# Secondary servers must not have logged temporary
# tables. Acceptable values are:
# 0 Create temporary tables with logging enabled by
# default.
# 1 Create temporary tables without logging.
# Required to be set to 1 on HDR, RSS, and SDS
# secondary servers.
###################################################################
DRAUTO 0
DRINTERVAL 30
DRTIMEOUT 30
HA_ALIAS
DRLOSTFOUND /login/infown/log/systestdb/dr.lostfound
DRIDXAUTO 0
LOG_INDEX_BUILDS
SDS_ENABLE
SDS_TIMEOUT 20
SDS_TEMPDBS
SDS_PAGING
UPDATABLE_SECONDARY 0
FAILOVER_CALLBACK
TEMPTAB_NOLOG 0
###################################################################
# Logical Recovery Parameters
###################################################################
# ON_RECVRY_THREADS - The number of logical recovery threads that
# run in parallel during a warm restore.
# OFF_RECVRY_THREADS - The number of logical recovery threads used
# in a cold restore. Also, the number of
# threads used during fast recovery.
###################################################################
ON_RECVRY_THREADS 1
OFF_RECVRY_THREADS 10
###################################################################
# Diagnostic Dump Configuration Parameters
###################################################################
# DUMPDIR - The location of Assertion Failure (AF) diagnostic
# files
# DUMPSHMEM - Controls shared memory dumps. Acceptable values
# are:
# 0 Disabled
# 1 Dump all shared memory
# 2 Exclude the buffer pool from the dump
# DUMPGCORE - Enables (1) or disables (0) whether IDS dumps a
# core using gcore
# DUMPCORE - Enables (1) or disables (0) whether IDS dumps a
# core after an AF
# DUMPCNT - The maximum number of shared memory dumps or
# core files for a single session
###################################################################
DUMPDIR /recyclebox/inf/dump/systestdb
DUMPSHMEM 1
DUMPGCORE 0
DUMPCORE 0
DUMPCNT 1
###################################################################
# Alarm Program Configuration Parameters
###################################################################
# ALARMPROGRAM - Specifies the alarm program to display event
# alarms. To enable automatic logical log backup,
# edit alarmprogram.sh and set BACKUPLOGS=Y.
# ALRM_ALL_EVENTS - Controls whether the alarm program runs for
# every event. Acceptable values are:
# 1 Logs only noteworthy events
# 2 Logs all events
# STORAGE_FULL_ALARM - The format is:
# <time interval in seconds>,<alarm severity>
# Specifies how often a message is printed to the
# online.log file and an alarm is raised when:
# - a dbspace becomes full (ISAM error -131)
# - a partition runs out of pages or extents
#   (ISAM error -136)
# A time interval of 0 turns the alarm off; a
# severity of 0 raises no alarm, only the message.
# SYSALARMPROGRAM - Specifies the system alarm program triggered
# when an AF occurs
###################################################################
ALARMPROGRAM $INFORMIXDIR/etc/alarmprogram.sh
ALRM_ALL_EVENTS 0
STORAGE_FULL_ALARM 600,3
SYSALARMPROGRAM $INFORMIXDIR/etc/evidence.sh
###################################################################
# RAS Configuration Parameters
###################################################################
# RAS_PLOG_SPEED - Technical Support diagnostic parameter.
# Do not change; automatically updated.
# RAS_LLOG_SPEED - Technical Support diagnostic parameter.
# Do not change; automatically updated.
###################################################################
RAS_PLOG_SPEED 0
RAS_LLOG_SPEED 4288
###################################################################
# Character Processing Configuration Parameter
###################################################################
# EILSEQ_COMPAT_MODE - Controls whether when processing characters,
# IDS checks if the characters are valid for
# the locale and returns error -202 if they are
# not. Acceptable values are:
# 0 Return an error for characters that are not
# valid (Default)
# 1 Allow characters that are not valid
####################################################################
EILSEQ_COMPAT_MODE 0
###################################################################
# Statistic Configuration Parameters
###################################################################
# QSTATS - Enables (1) or disables (0) the collection of queue
# statistics that can be viewed with onstat -g qst
# WSTATS - Enables (1) or disables (0) the collection of wait
# statistics that can be viewed with onstat -g wst
####################################################################
QSTATS 0
WSTATS 0
###################################################################
# Java Configuration Parameters
###################################################################
# VPCLASS jvp - Configures the Java VP. The format is:
# VPCLASS jvp,num=<#>[,max=<#>][,aff=<#>][,noage]
# JVPJAVAHOME - The JRE root directory
# JVPHOME - The Krakatoa installation directory
# JVPPROPFILE - The Java VP property file
# JVPLOGFILE - The Java VP log file
# JDKVERSION - The version of JDK supported by this server
# JVPJAVALIB - The location of the JRE libraries, relative
# to JVPJAVAHOME
# JVPJAVAVM - The JRE libraries to use for the Java VM
# JVPARGS - Configures the Java VM. To display JNI calls,
# use JVPARGS -verbose:jni. Separate options with
# semicolons.
# JVPCLASSPATH - The Java classpath to use. Use krakatoa_g.jar
# for debugging. Comment out the JVPCLASSPATH
# entry you do not want to use.
###################################################################
#VPCLASS jvp,num=1
JVPJAVAHOME $INFORMIXDIR/extend/krakatoa/jre
JVPHOME $INFORMIXDIR/extend/krakatoa
JVPPROPFILE $INFORMIXDIR/extend/krakatoa/.jvpprops
JVPLOGFILE $INFORMIXDIR/jvp.log
JDKVERSION 1.5
JVPJAVALIB /bin
JVPJAVAVM jvm
#JVPARGS -verbose:jni
#JVPCLASSPATH $INFORMIXDIR/extend/krakatoa/krakatoa_g.jar:$INFORMIXDIR/extend/krakatoa/jdbc_g.jar
JVPCLASSPATH $INFORMIXDIR/extend/krakatoa/krakatoa.jar:$INFORMIXDIR/extend/krakatoa/jdbc.jar
###################################################################
# Buffer pool and LRU Configuration Parameters
###################################################################
# BUFFERPOOL - Specifies the default values for buffers and LRU
# queues in each buffer pool. Each page size used
# by a dbspace has a buffer pool and needs a
# BUFFERPOOL entry. The onconfig.std file contains
# two initial entries: a default entry from which
# to base new page size entries on, and an entry
# for the operating system default page size.
# When you add a dbspace with a different page size,
# IDS adds a BUFFERPOOL entry to the onconfig file
# with values that are the same as the default
# BUFFERPOOL entry, except that the default
# keyword is replaced by size=Nk, where N is the
# new page size. With interval checkpoints, these
# values can now be set higher than in previous
# versions of IDS in an OLTP environment.
# AUTO_LRU_TUNING - Enables (1) or disables (0) automatic tuning of
# LRU queues. When this parameter is enabled, IDS
# increases the LRU flushing if it cannot find low
# priority buffers for number page faults.
###################################################################
BUFFERPOOL default,buffers=10000,lrus=8,lru_min_dirty=50.000000,lru_max_dirty=60.500000
BUFFERPOOL size=4K,buffers=300000,lrus=16,lru_min_dirty=50.000000,lru_max_dirty=60.000000
AUTO_LRU_TUNING 1
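As the BUFFERPOOL comment above notes, every page size used by a dbspace needs its own entry, and IDS clones the default entry when a new page size appears. For illustration only, if an 8K dbspace were added to this instance, the appended entry would look like the following (values copied from this file's default entry, not a tuning recommendation):

```
BUFFERPOOL size=8K,buffers=10000,lrus=8,lru_min_dirty=50.000000,lru_max_dirty=60.500000
```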
Figure I: Informix Database ER Diagram, Queues and Tables (diagram not reproducible in text form; it shows Queues 10, 21, 31, 32, 41, 51, 70, 71 and 81, together with the ICCBillingUpload, ICCSetExpiryDate and CARRIER tables).
Applications and Scripts
# @(#)20 1.9 src/bos/usr/sbin/cron/adm, cmdcntl, bos530 9/9/91 06:03:17
# IBM_PROLOG_BEGIN_TAG
# This is an automatically generated prolog.
#
# bos530 src/bos/usr/sbin/cron/adm 1.9
#
# Licensed Materials - Property of IBM
#
# (C) COPYRIGHT International Business Machines Corp. 1989,1991
# All Rights Reserved
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
# IBM_PROLOG_END_TAG
#
# COMPONENT_NAME: (CMDCNTL) commands needed for basic system needs
#
# FUNCTIONS:
#
# ORIGINS: 27,18
#
# (C) COPYRIGHT International Business Machines Corp. 1989,1991
# All Rights Reserved
# Licensed Materials - Property of IBM
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
#
#
#=================================================================
# SYSTEM ACTIVITY REPORTS
# 8am-5pm activity reports every 20 mins during weekdays.
# activity reports every hour on Saturday and Sunday.
# 6pm-7am activity reports every hour during weekdays.
# Daily summary prepared at 18:05.
#=================================================================
#0 8-17 * * 1-5 /usr/lib/sa/sa1 1200 3 &
#0 * * * 0,6 /usr/lib/sa/sa1 &
#0 18-7 * * 1-5 /usr/lib/sa/sa1 &
#5 18 * * 1-5 /usr/lib/sa/sa2 -s 8:00 -e 18:01 -i 3600 -ubcwyaqvm &
#=================================================================
# PROCESS ACCOUNTING:
# runacct at 11:10 every night
# dodisk at 11:00 every night
# ckpacct every hour on the hour
# monthly accounting 4:15 the first of every month
#=================================================================
#10 23 * * 0-6 /usr/lib/acct/runacct 2>/usr/adm/acct/nite/accterr > /dev/null
#0 23 * * 0-6 /usr/lib/acct/dodisk > /dev/null 2>&1
#0 * * * * /usr/lib/acct/ckpacct > /dev/null 2>&1
#15 4 1 * * /usr/lib/acct/monacct > /dev/null 2>&1
#=================================================================
#Gather System info;
1 0 * * 0 /home/dguo/sysinfo/conf/getinfo.ksh 1>/dev/null 2>&1
#Report
55 5 * * * /home/dguo/script/check_2020.ksh>/dev/null 2>&1
55 5 * * * /home/dguo/script/check_507norecap.ksh>/dev/null 2>&1
55 5 * * * /home/dguo/script/check_b3.ksh>/dev/null 2>&1
0 3 * * 0 /usr/esa/sbin/esa_awareness
# cleanup the log
###0 0 * * * /usr/apps/inf/bob/cleanuplog/cleanuplog.ksh > /usr/apps/inf/bob/cleanuplog/cleanuplog.out 2>&1
# loglist ierr files, and send the output files
0 9 * * * /usr/apps/inf/bob/loglist/checkiperr.ksh >> /usr/apps/inf/bob/loglist/checkiperr.out 2>&1
# compare the txn data between IP and Locus
# As per Esa, stop running below job;
###0 9 * * * /usr/apps/inf/bob/compareb3/compareb3.ksh >> /usr/apps/inf/bob/compareb3/compareb3.out
# update statistics
0 0 * * * /usr/apps/inf/bob/upstat/upstat.ksh >> /usr/apps/inf/bob/upstat/upstat.out 2>&1
#for performance;
#46 * * * * /home/informix/purgeslow/io.off/get_hourly.ksh >/dev/null
#46,16 * * * * /home/informix/purgeslow/io.off/get_half.ksh >/dev/null
#46,16 * * * * /home/informix/purgeslow/io.off/get_sys.ksh >/dev/null
#Monthly client_invoice purge;
#30 21 9 * * /usr/apps/inf/maintenance/invoice/inv_purge.ksh 1>/usr/apps/inf/maintenance/invoice/inv_purge.log 2>&1
#0 0 * * * cat /dev/null > /login/insqry/sqexplain.out
# purge download files
1 7 * * * /home/ipgown/purgebds/purgetbl.ksh >> /home/ipgown/purgebds/purgetbl.out 2>&1
5 7 * * * /home/ipgown/purgebds/purgefile.ksh >> /home/ipgown/purgebds/purgefile.out 2>&1
# start bds
###5 6 * * * ksh -c /usr/apps/ipg/ver001/srv/bds/pgm/ip_0p/start.ksh > /dev/null
# Restart Locus Services;
30 3 * * 0 /usr/apps/ipg/ver001/srv/locus/loc_restart.ksh 1>/usr/apps/ipg/ver001/srv/locus/restart.out 2>&1
################################################################
# Monitor Livingston AIX Errors Utility
################################################################
0 5,15 * * * ksh ~/scripts/sendMail/errMail >/dev/null 2>&1
#
################################################################
# Monitor Livingston AIX Performance Utility
################################################################
10 * * * * ksh ~/scripts/sendMail/perfMail > /dev/null 2>&1
#
################################################################
# Collect Informix SQL
################################################################
15 * * * * ksh ~/scripts/sendMail/infMail ipdb > /dev/null 2>&1
#
################################################################
# Collect Java Data Load Logs
################################################################
25 23 * * * ksh ~/scripts/sendMail/insLogMail > /dev/null 2>&1
#
###############################################################
# Monitor Storage usage
##############################################################
30 23 * * 1-6 ksh ~/scripts/sendMail/stgMail ipdb Daily > /dev/null 2>&1
30 23 * * 0 ksh ~/scripts/sendMail/stgMail ipdb Weekly > /dev/null 2>&1
30 23 1 * * ksh ~/scripts/sendMail/stgMail ipdb Monthly > /dev/null 2>&1
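For readers less familiar with crontab syntax, the five leading fields of each entry above are minute, hour, day of month, month and day of week (0 = Sunday). The three stgMail entries, for example, decode as:

```
30 23 * * 1-6   ->  23:30 Monday through Saturday   (daily storage report)
30 23 * * 0     ->  23:30 Sunday                    (weekly storage report)
30 23 1 * *     ->  23:30 on the 1st of each month  (monthly storage report)
```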
# @(#)08 1.15.1.3 src/bos/usr/sbin/cron/root, cmdcntl, bos530 2/11/94 17:19:47
# IBM_PROLOG_BEGIN_TAG
# This is an automatically generated prolog.
#
# bos530 src/bos/usr/sbin/cron/root 1.15.1.3
#
# Licensed Materials - Property of IBM
#
# (C) COPYRIGHT International Business Machines Corp. 1989,1994
# All Rights Reserved
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
# IBM_PROLOG_END_TAG
#
# COMPONENT_NAME: (CMDCNTL) commands needed for basic system needs
#
# FUNCTIONS:
#
# ORIGINS: 27
#
# (C) COPYRIGHT International Business Machines Corp. 1989,1994
# All Rights Reserved
# Licensed Materials - Property of IBM
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
#0 3 * * * /usr/sbin/skulker
#45 2 * * 0 /usr/lib/spell/compress
#45 23 * * * ulimit 5000; /usr/lib/smdemon.cleanu > /dev/null
# SSA warning : Deleting the next two lines may cause errors in redundant
# SSA warning : hardware to go undetected.
01 5 * * * /usr/lpp/diagnostics/bin/run_ssa_ela 1>/dev/null 2>/dev/null
0 * * * * /usr/lpp/diagnostics/bin/run_ssa_healthcheck 1>/dev/null 2>/dev/null
# SSA warning : Deleting the next line may allow enclosure hardware errors to go undetected
30 * * * * /usr/lpp/diagnostics/bin/run_ssa_encl_healthcheck 1>/dev/null 2>/dev/null
# SSA warning : Deleting the next line may allow link speed exceptions to go undetected
30 4 * * * /usr/lpp/diagnostics/bin/run_ssa_link_speed 1>/dev/null 2>/dev/null
0 11 * * * /usr/bin/errclear -d S,O 30
0 12 * * * /usr/bin/errclear -d H 90
0 15 * * * /usr/lib/ras/dumpcheck >/dev/null 2>&1
#########################################################################
# #
# IP Operations Environment #
# #
#########################################################################
#--> runner
* 6-7 * * * ksh /insight/local/scripts/runner.10.ksh >> /dmqjtmp/archiveRunnerLog/runner.10.out 2>&1
* 8-20 * * * ksh /insight/local/scripts/runner.all.ksh >> /dmqjtmp/archiveRunnerLog/runner.all.out 2>&1
20-40 22 * * * ksh /insight/local/scripts/runner.71.ksh >> /dmqjtmp/archiveRunnerLog/runner.71.out 2>&1
1,16,31,46 8-20 * * * /insight/local/scripts/iccdataupload/StartInsightUpload.ksh >> /insight/local/scripts/iccdataupload/StartInsightUpload.out 2>&1
2,17,32,47 8-20 * * * /insight/local/scripts/ICCSetExpiryDates/StartInsightSetExpiryDates.ksh >> /insight/local/scripts/ICCSetExpiryDates/StartInsightSetExpiryDates.out 2>&1
5 7-12 * * * /insight/local/scripts/ICCBillingUpload/StartInsightBillingUpload.ksh >> /insight/local/scripts/ICCBillingUpload/StartInsightBillingUpload.out 2>&1
#--> monitor
###30 6 * * 1-5 ksh /insight/local/scripts/emailDog.ksh >> /dmqjtmp/archiveEdogLog/emailDog.out 2>&1
10 * * * 1-5 ksh /insight/local/scripts/alertDog.ksh >> /dmqjtmp/archiveAdogLog/alertDog.out 2>&1
30 * * * 1-5 ksh /insight/local/scripts/watchDog.ksh >> /dmqjtmp/archiveWdogLog/watchDog.out 2>&1
#--> archive log file directory
30 6 * * * ksh /insight/local/scripts/tuxLogClean.ksh > /dmqjtmp/archiveTuxLog/tuxLogClean.out 2>&1
30 23 * * * ksh /insight/local/scripts/oprLogClean.ksh > /dmqjtmp/archiveOprLog/oprLogClean.out 2>&1
30 23 * * * ksh /insight/local/scripts/betaLogClean.ksh > /dmqjtmp/archiveBetaLog/betaLogClean.out 2>&1
5 4 * * * ksh /insight/local/scripts/bdsLogClean.ksh > /dmqjtmp/archiveBdsLog/bdsLogClean.out 2>&1
#--> clean up the log and ptr files
30 6 * * * ksh /insight/local/scripts/ptrLogClean.ksh > /dmqjtmp/ptrCleanLog/ptrLogClean.out 2>&1
30 6 * * * ksh /insight/local/scripts/bkupLogClean.ksh > /dmqjtmp/bkupCleanLog/bkupLogClean.out 2>&1
30 6 * * * ksh /insight/local/scripts/sqexplainClean.ksh >/dev/null 2>&1
#--> system and application backup
#30,33,36,39,42,45,48,51,54,57 23 * * 6 /home/dguo/script/check_systape.ksh > /dev/null 2>&1
5 23 * * 6 ksh -c "/insight/local/backup/sysbkup.ksh rmt0" >> /dmqjtmp/archiveSysbkupLog/sysbkup.out 2>&1
#30,33,36,39,42,45,48,51,54,57 1 * * 0,2-6 /home/dguo/script/check_apptape.ksh > /dev/null 2>&1
5 1 * * 0,2-6 ksh -c "/insight/local/backup/appbkup.ksh rmt0" >> /dmqjtmp/archiveAppbkupLog/appbkup.out 2>&1
#30,33,36,39,42,45,48,51,54,57 3 * * 0,2-6 /home/dguo/script/check_dbstape.ksh > /dev/null 2>&1
1 4 * * 0,2-6 ksh -c "/insight/local/backup/dbsbkup.ksh" >> /dmqjtmp/archiveDbsbkupLog/dbsbkup.out 2>&1
#--> Perl script funnel files
30 6 * * * /insight/local/scripts/getTxnRpt.pl > /dmqjtmp/archiveFfileLog/getTxnRpt.out 2>&1
#Daily cron job backup;
0 3 * * * /insight/local/scripts/cron_bkup.ksh > /dev/null 2>&1
#-->/sitemgr/b3_arch/run_autoarchive.ksh
#15 15 * * * /insight/local/b3_arch/run_autoarchive.ksh >> /dmqjtmp/archiveB3Log/monthly_archive.log 2>&1
###################################################################
0 0 * * 1 /insight/nmon/startnmon.ksh >> /insight/nmon/startnmon.out 2>&1
#0 0 * * 3-4 /admin/emcdr/iostat/iostat.ksh >> /admin/emcdr/iostat/iostat.out 2>&1
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /usr/sbin/dumpctrl -k >/dev/null 2>/dev/null
0 0 * * * /opt/csm/bin/cfmupdatenode -a 1>/dev/null 2>/dev/null
0 0 * * * /opt/csm/csmbin/cleanup.logs.csp 1>>/var/log/csm/csperror.log 2>>/var/log/csm/csperror.log
55 23 * * * /var/perf/pm/bin/pmcfg >/dev/null 2>&1 #Enable PM Data Collection
59 23 * * * /var/perf/pm/bin/pmcfg -T >/dev/null 2>&1 #Enable PM Data Transmission
# @(#)09 1.6 src/bos/usr/sbin/cron/sys, cmdcntl, bos530 4/25/91 17:17:05
# IBM_PROLOG_BEGIN_TAG
# This is an automatically generated prolog.
#
# bos530 src/bos/usr/sbin/cron/sys 1.6
#
# Licensed Materials - Property of IBM
#
# (C) COPYRIGHT International Business Machines Corp. 1989,1991
# All Rights Reserved
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
# IBM_PROLOG_END_TAG
#
# COMPONENT_NAME: (CMDCNTL) commands needed for basic system needs
#
# FUNCTIONS:
#
# ORIGINS: 27,18
#
# (C) COPYRIGHT International Business Machines Corp. 1989,1991
# All Rights Reserved
# Licensed Materials - Property of IBM
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
#
# @(#)21 1.2 src/bos/usr/bin/uucp/uucron/uucp, cmduucp, bos610 10/8/90 09:34:47
# IBM_PROLOG_BEGIN_TAG
# This is an automatically generated prolog.
#
# bos610 src/bos/usr/bin/uucp/uucron/uucp 1.2
#
# Licensed Materials - Property of IBM
#
# COPYRIGHT International Business Machines Corp. 1985,1990
# All Rights Reserved
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
# IBM_PROLOG_END_TAG
#
# COMPONENT_NAME: UUCP uucp
#
# FUNCTIONS:
#
# ORIGINS: 10 27 3
#
# (C) COPYRIGHT International Business Machines Corp. 1985, 1989, 1990
# All Rights Reserved
# Licensed Materials - Property of IBM
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
#
# 20,50 * * * * /bin/bsh -c "/usr/lib/uucp/uudemon.poll > /dev/null"
# 25,55 * * * * /bin/bsh -c "/usr/lib/uucp/uudemon.hour > /dev/null"
# 45 23 * * * /bin/bsh -c "/usr/lib/uucp/uudemon.cleanu > /dev/null"
# 48 8,12,16 * * * /bin/bsh -c "/usr/lib/uucp/uudemon.admin > /dev/null
USAGE: col.cont <input file>
Start: collecting the content of the scripts listed in file SCRIPTS
#####################################################################
# Script /home/dguo/sysinfo/conf/getinfo.ksh
#####################################################################
#####################################################################
# Script /home/dguo/script/check_2020.ksh
#####################################################################
#!/usr/bin/ksh
#######################################################
# Description:
# Used to check b3 table based on Esa's request
#
# Author:
#
#
#######################################################
HomeDir=/home/dguo
ReportDir=$HomeDir/report
OutFile=$ReportDir/check_2020.out
cd /home/dguo
. ./ids115.env ipdb
dbaccess ip_0p@ipdb - <<EOF 1>$OutFile 2>&1
set isolation to dirty read;
select liibrchno,liirefno,status,k84date,approveddate
from b3
where approveddate like '2020%'
order by k84date;
EOF
# and b3.createdate like "2006/%");
mail -s "approveddate like 2020% " ekotsalainen@livingstonintl.com<$OutFile
#mail -s "approveddate like 2020% " dguo@livingstonintl.com<$OutFile
#rm $outfile
#####################################################################
# Script /home/dguo/script/check_507norecap.ksh
#####################################################################
#!/usr/bin/ksh
##############################################################
# Description:
# Used to check transaction has 507 status but has no recap
#
# Author:
# Denny Guo
#
###############################################################
HomeDir=/home/dguo
ReportDir=$HomeDir/report
OutFile=$ReportDir/507_norecap.out
cd /home/dguo
. ./ids115.env ipdb
dbaccess ip_0p@ipdb - <<EOF 1>$OutFile 2>&1
set isolation to dirty read;
select b3.b3iid,b3.liibrchno,b3.liirefno,b3.status,
b3.approveddate,b3.k84date
from b3
where b3.status = 507
and b3.k84date <> "1753/01/01 00:00:00"
and b3.b3iid not in
( select b3iid from b3_subheader);
EOF
mail -s "507 status without recap @ `date`" ekotsalainen@livingstonintl.com<$OutFile
#mail -s "507 status without recap @ `date`" dguo@livingstonintl.com<$OutFile
#rm $outfile
#####################################################################
# Script /home/dguo/script/check_b3.ksh
#####################################################################
#!/usr/bin/ksh
#######################################################
# Description:
# Used to check b3 table based on Esa's request
#
# Author:
# Denny Guo
#
#######################################################
HomeDir=/home/dguo
ReportDir=$HomeDir/report
OutFile=$ReportDir/check_b3.out
cd /home/dguo
. ./ids115.env ipdb
#dbaccess ip_0p@ipdb - <<EOF 1>$OutFile 2>&1
#set isolation to dirty read;
#select * from b3 where b3iid = '1111111';
#EOF
dbaccess ip_0p@ipdb - <<EOF 1>$OutFile 2>&1
set isolation to dirty read;
select b3.b3iid,b3.liibrchno,b3.liirefno,b3.createdate,
b3.k84date,b3.approveddate,
b3.status b3_status,status_history.status history_status,
status_history.statusdate
from b3,status_history
where (b3.status <> 507
and b3.b3iid = status_history.b3iid
and status_history.status = 507 );
EOF
mail -s "B3 Status Information" ekotsalainen@livingstonintl.com<$OutFile
#mail -s "B3 Status Information" dguo@livingstonintl.com<$OutFile
#rm $outfile
#####################################################################
# Script /usr/apps/inf/bob/cleanuplog/cleanuplog.ksh
#####################################################################
#!/bin/ksh
###################################################################
#
# cleanuplog.ksh
#
# cleanup the log file for compareb3 and loglist
#
###################################################################
set -x
date
find /usr/apps/inf/bob/loglist -name "*.xls" -type f -mtime +7 -exec /usr/bin/rm -f {} \; \
> /dev/null 2>&1 ;
find /usr/apps/inf/bob/loglist -name "*.out" -type f -mtime +7 -exec /usr/bin/rm -f {} \; \
> /dev/null 2>&1 ;
#find /usr/apps/inf/bob/compareb3 -name "*b3i*" -type f -mtime +7 -exec /usr/bin/rm -f {} \; \
#> /dev/null 2>&1 ;
#find /usr/apps/inf/bob/compareb3 -name "*mis*" -type f -mtime +7 -exec /usr/bin/rm -f {} \; \
#> /dev/null 2>&1 ;
#find /usr/apps/inf/bob/compareb3 -name "*rpt*" -type f -mtime +7 -exec /usr/bin/rm -f {} \; \
#> /dev/null 2>&1 ;
exit 0
#####################################################################
# Script /usr/apps/inf/bob/loglist/checkiperr.ksh
#####################################################################
#!/bin/ksh
#
# Author: Bob Chong
# Date: Sept 18, 2002
# Purpose: check TCL error
#
set -x
export INFORMIXSERVER=ipdb
#export INFORMIXDIR=/usr/apps/inf/ver731UD10
export INFORMIXDIR=/usr/apps/inf/ver115UC3
export PATH=$PATH:$INFORMIXDIR/bin:/usr/apps/ipg/ver001/util:/usr/apps/inf/bob/loglist
TclerrDir=/dmqjtmp/archiveTclerrLog
# get yesterday date
runyy=$(get_day 1 | cut -d - -f 1)
runmm=$(get_day 1 | cut -d - -f 2)
rundd=$(get_day 1 | cut -d - -f 3)
rundate=${runyy}${runmm}${rundd}
#create Target Directory;
mkdir ${TclerrDir}/${rundate}
date
checkerr10.ksh
checkerr21.ksh
checkerr31.ksh
checkerr32.ksh
checkerr34.ksh
checkerr41.ksh
#checkerr46.ksh
checkerr51.ksh
checkerr52.ksh
#checkerr61.ksh
checkerr70.ksh
checkerr71.ksh
checkerr81.ksh
#remove old logs older than 3 days;
# get yesterday date
oldyy=$(get_day 4 | cut -d - -f 1)
oldmm=$(get_day 4 | cut -d - -f 2)
olddd=$(get_day 4 | cut -d - -f 3)
olddate=${oldyy}${oldmm}${olddd}
rm -r ${TclerrDir}/${olddate}
exit 0
#####################################################################
# Script /usr/apps/inf/bob/compareb3/compareb3.ksh
#####################################################################
#!/bin/ksh
#
# Date: 10-07-2000
# Purpose: compare B3 data between Locus and IP
# Usage: compareb3.ksh <yyyymmdd>
# By default, the process date is yesterday
#
set -x
export INFORMIXDIR=/usr/apps/inf/ver731UD10
export INFORMIXSERVER=ipdb
export PATH=./:$PATH:$INFORMIXDIR/bin:/usr/apps/ipg/ver001/util:
# get a processed date
if [[ $# -eq 0 ]]; then
runyy=`get_day 1 | cut -d - -f 1 | cut -c 3-4`
runmm=`get_day 1 | cut -d - -f 2`
rundd=`get_day 1 | cut -d - -f 3`
else
echo "\nProcessed Date (yyyymmdd) is: $1\nPlease press 'y' to continue ... \c"
read res
case $res in
y|Y)
#echo "\n\t\t Continue with program...."
runyy=`echo $1 |cut -c 3-4`
runmm=`echo $1 |cut -c 5-6`
rundd=`echo $1 |cut -c 7-8`
;;
*) echo "\nUsage: compareb3.ksh <yyyymmdd>"
exit 1
;;
esac
fi
# set up the variables and goto working directory
rundate=${runyy}${runmm}${rundd}
compareB3Dir=/usr/apps/inf/bob/compareb3
cd $compareB3Dir
compareB3()
{
# for b3 table
b3_inf=b3i.${rundate}
b3_mis=b3_mis.20${rundate}
b3_rpt=b3_rpt.20${rundate}
# rcp then dbcomp then rcp
echo $b3_inf $b3_mis $b3_rpt
rcp "informix@bellat:$b3_inf" $b3_inf
dbcomp ip_0p b3 $b3_inf $b3_mis > $b3_rpt
rcp $b3_mis "informix@bellat:$b3_mis"
rcp $b3_rpt "informix@bellat:$b3_rpt"
}
compareB3Details()
{
# for recap tables
b3recap_inf=recap.20${rundate}
b3recap_mis=recap_mis.20${rundate}
b3recap_rpt=recap_rpt.20${rundate}
# rcp then b3checksub then rcp
echo $b3recap_inf $b3recap_mis $b3recap_rpt
rcp "informix@bellat:$b3recap_inf" $b3recap_inf
b3checksub ip_0p $b3recap_inf $b3recap_mis > $b3recap_rpt
rcp $compareB3Dir/$b3recap_mis "informix@bellat:$b3recap_mis"
rcp $compareB3Dir/$b3recap_rpt "informix@bellat:$b3recap_rpt"
}
compareOpn()
{
# for client invoice table
opn_inf=opn.20${rundate}
opn_mis=opn_mis.20${rundate}
opn_rpt=opn_rpt.20${rundate}
# rcp then dbcomp then rcp
echo $opn_inf $opn_mis $opn_rpt
rcp "informix@bellat:$opn_inf" $opn_inf
dbcomp ip_0p client_invoice $opn_inf $opn_mis > $opn_rpt
rcp $compareB3Dir/$opn_mis "informix@bellat:$opn_mis"
rcp $compareB3Dir/$opn_rpt "informix@bellat:$opn_rpt"
}
cleanUp()
{
/usr/bin/find /usr/apps/inf/bob/compareb3 -name "*.20*" \
-type f -mtime +3 -exec /usr/bin/rm -f {} \; >/dev/null 2>&1
}
# main
compareB3
compareB3Details
compareOpn
cleanUp
exit 0
#####################################################################
# Script /usr/apps/inf/bob/upstat/upstat.ksh
#####################################################################
#!/bin/ksh
#
# purpose: run update statistics medium
#
export INFORMIXDIR=/usr/apps/inf/ver115UC3
export INFORMIXSERVER=ipdb
export PATH=$INFORMIXDIR/bin:$PATH
SQLDIR=/usr/apps/inf/bob/upstat
echo
date
time dbaccess < $SQLDIR/tbls_med.sql > $SQLDIR/tbls_med.out 2>&1
time dbaccess < $SQLDIR/tbls_high.sql > $SQLDIR/tbls_high.out 2>&1
time dbaccess < $SQLDIR/proc.sql > $SQLDIR/proc.out 2>&1
exit 0
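The three .sql files driving upstat.ksh are not reproduced in this manual. For orientation, a statement of the kind tbls_med.sql would contain looks like the following (the table name is illustrative; the actual file contents were not captured):

```sql
UPDATE STATISTICS MEDIUM FOR TABLE b3;
```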
#####################################################################
# Script /home/informix/purgeslow/io.off/get_hourly.ksh
#####################################################################
#!/usr/bin/ksh
cd $HOME
. ./ids115.env systestdb
cd $HOME/purgeslow/io.off
date >> stat-a.out
echo "----------------------------------------" >> stat-a.out
onstat -a >> stat-a.out
echo >> stat-a.out
echo >> stat-a.out
echo >> stat-a.out
echo "----------------------------------------" >> stat-a.out
echo >> stat-a.out
echo >> stat-a.out
#####################################################################
# Script /home/informix/purgeslow/io.off/get_half.ksh
#####################################################################
#!/usr/bin/ksh
cd $HOME
. ./ids115.env systestdb
cd $HOME/purgeslow/io.off
i=0
while [[ $i -lt 3 ]]
do
date >> stat-half.out
echo "----------------------------------------" >> stat-half.out
onstat -g glo >> stat-half.out
echo >> stat-half.out
onstat -g cpu >> stat-half.out
echo >> stat-half.out
onstat -g stk all >> stat-half.out
echo >> stat-half.out
onstat -g ath >> stat-half.out
echo >> stat-half.out
onstat -g ppf >> stat-half.out
echo >> stat-half.out
onstat -g rea >> stat-half.out
echo >> stat-half.out
onstat -g ioa >> stat-half.out
echo >> stat-half.out
echo "----------------------------------------" >> stat-half.out
i=$(($i+1))
echo >> stat-half.out
echo >> stat-half.out
done
#####################################################################
# Script /home/informix/purgeslow/io.off/get_sys.ksh
#####################################################################
#!/usr/bin/ksh
cd $HOME
cd $HOME/purgeslow/io.off
date >> sys.out
echo "----------------------------------------" >> sys.out
vmstat 3 5 >> sys.out
echo >>sys.out
lparstat 1 15 >> sys.out
echo >>sys.out
mpstat >> sys.out
echo >>sys.out
mpstat -d >> sys.out
echo >>sys.out
echo "----------------------------------------" >> sys.out
echo >>sys.out
echo >>sys.out
echo >>sys.out
#####################################################################
# Script /usr/apps/inf/maintenance/invoice/inv_purge.ksh
#####################################################################
#!/bin/ksh
INFORMIXSERVER=ipdb
INFORMIXDIR=/usr/apps/inf/ver115UC3
GL_DATETIME="%iY/%m/%d %H:%M:%S"
PATH=$INFORMIXDIR/bin:$PATH
export INFORMIXSERVER INFORMIXDIR GL_DATETIME PATH
cd /usr/apps/inf/maintenance/invoice
time dbaccess ip_0p < ./inv_purge.sql > ./inv_purge.out 2>&1
echo "DONE !!!\n"
mail -s "Monthly Invoice Purge Done @ `date`." dguo@livingstonintl.com <inv_purge.log
exit 0
#####################################################################
# Script /home/ipgown/purgebds/purgetbl.ksh
#####################################################################
#!/usr/bin/ksh
#######################################################################
# The script will create a SQL file to purge "t" tables and "t" records
# older than 2 days. And it will remove the tables and rows.
# Author: Bob Chong
# Date: Nov 1, 2001
########################################################################
umask 0000
INFORMIXDIR=/usr/apps/inf/ver115UC3
INFORMIXSERVER="ipdb"
PATH=$INFORMIXDIR/bin:$PATH
export INFORMIXDIR INFORMIXSERVER PATH
local_dir=/home/ipgown/purgebds
job_file=/home/ipgown/purgebds/joblist.txt
sql_file=/home/ipgown/purgebds/purgetbl.sql
out_file=/home/ipgown/purgebds/outlist.txt
cd $local_dir
$INFORMIXDIR/bin/dbaccess ip_0p@ipdb - << EOF > $job_file 2>&1
select "#JOB#", tabname[1,20], tabid
from systables
where tabname matches 't[0-9][0-9][0-9][0-9]*'
and (today - created) > 1
order by tabname;
EOF
grep '\#JOB\#' $job_file | while read JOB BATCH_ID TABID;
do
echo "drop table $BATCH_ID;" >> $sql_file
echo "delete from srch_crit_batch where tablename=\""$BATCH_ID"\";" >> $sql_file
echo "delete from bat_info where qms_id=\""$BATCH_ID"\";" >> $sql_file
done
if [[ -a $sql_file ]]
then
$INFORMIXDIR/bin/dbaccess ip_0p@ipdb < $sql_file > $out_file 2>&1
#rm $sql_file
#rm $out_file
fi
#rm $job_file
exit 0
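To illustrate the loop above: for a batch table named (hypothetically) t20120930x1 returned by the systables query, the script appends three statements to purgetbl.sql, which dbaccess then executes as a batch:

```sql
drop table t20120930x1;
delete from srch_crit_batch where tablename="t20120930x1";
delete from bat_info where qms_id="t20120930x1";
```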
#####################################################################
# Script /home/ipgown/purgebds/purgefile.ksh
#####################################################################
#!/bin/ksh
#####################################################
# The script will remove all the files older than
# 2 days under /netins directory
# Author: Bob Chong
# Date: Nov 1, 2001
#####################################################
# cleanup "t" files and "link" files in /netins
/usr/bin/find /netins -name "t[0-9]*.txt" -type f -mtime +1 -exec rm -f {} \; > /dev/null 2>&1
/usr/bin/find /netins -type l -mtime +1 -exec rm -f {} \; > /dev/null 2>&1
exit 0
#####################################################################
# Script /usr/apps/ipg/ver001/srv/bds/pgm/ip_0p/start.ksh
#####################################################################
#!/bin/ksh
#
# point to the right environments
cd /usr/apps/ipg/ver001/srv/insight
. ./setenv.insight
# change to apps directory and run bds and encrypt
cd /usr/apps/ipg/ver001/srv/bds/pgm/ip_0p
nohup ./qlistener > ./qlistener.out 2>&1 &
nohup ./bds ip_0p > ./bds.out 2>&1 &
nohup ./encrypt ip_0p > ./encrypt.out 2>&1 &
echo "All download processes have been started, please verify ...\n"
#####################################################################
# Script /usr/apps/ipg/ver001/srv/locus/loc_restart.ksh
#####################################################################
#!/usr/bin/ksh
########################################################
## Purpose: Used to restart LOCUS services weekly; ##
## Author: ##
## Date: August 1, 2007 ##
########################################################
cd /usr/apps/ipg/ver001/srv/locus
. ./setenv.locus
echo "************ `date` ***********\n"
ps -ef|grep -v "grep"|grep tcl|grep -v "com.ibm.lwi.LaunchLWI"
if [[ $? -eq 0 ]]
then
echo "Tcl is still running, it is unusual ...\n"
echo "We should not restart LOCUS services ....\n"
echo "Tcl should not be running at this time ... `date`" \
| mail -s "Tcl is running unusual. Please verify!" dguo@livingstonintl.com
else
echo "Tcl is not running .....\n"
echo "Shutdown locus services in progress .....\n\n\n"
tmshutdown -y
sleep 180
echo "\n\n==============================================================\n"
echo "Verify whether locus service has been shutdown completely ....\n"
ps -ef|grep locus|grep -v "grep"|grep -v "loc_restart"
if [[ $? -eq 0 ]]
then
echo "Locus shutdown is not completed ...\n"
echo "Quit restart procedure and notify Administrator ....\n"
echo "==============================================================\n"
echo "Please Call UNIX admin ASAP !!!" \
| mail -s "Urgent : Error with shutdown Locus Services !" dguo@livingstonintl.com
echo "Please Call UNIX admin ASAP !!!" \
| mail -s "Urgent : Error with shutdown Locus Services !" computerops@livingstonintl.com
else
echo "Locus services shutdown has completed.\n"
echo "We can restart all the services now.\n"
echo "==============================================================\n\n"
echo "************ `date` ***********\n"
echo "... no locus service is running ...\n"
echo "... restart services in progress ...\n"
tmboot -y
echo "\n\n============================================================\n"
echo "Verify locus services ....\n"
count=`ps -ef|grep locus|grep -v "grep"|grep -v "loc_restart"|wc -l`
if [[ $count -eq 16 ]]
then
echo "--- Locus services have been restarted without any issue. ---\n"
echo "============================================================\n"
else
echo "We had difficulty restarting Locus services ...\n"
echo "Quit restart procedure and notify Administrator ....\n"
echo "============================================================\n"
echo "Please Call UNIX admin ASAP !!!" \
| mail -s "Urgent : Error with restart Locus Services !" dguo@livingstonintl.com
echo "Please Call UNIX admin ASAP !!!" \
| mail -s "Urgent : Error with restart Locus Services !" computerops@livingstonintl.com
fi
fi
fi
mail -s "Locus Services Restart Report @ `date`" dguo@livingstonintl.com < restart.out
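The restart check above decides health by counting `ps -ef` lines that mention locus while excluding the grep and loc_restart entries, then comparing the count against the expected 16 services. A minimal, portable sketch of that filter chain (the `count_services` helper and the sample ps lines are illustrative, not part of the production script):

```shell
# Hypothetical helper mirroring the filter chain in loc_restart.ksh:
# keep lines mentioning locus, drop the grep and loc_restart entries,
# and count what is left.
count_services() {
  printf '%s\n' "$1" | grep locus | grep -v grep | grep -v loc_restart | wc -l | tr -d ' '
}

sample="locus 101 1 0 ./locsrv -g 1
locus 102 1 0 ./locsrv -g 2
root  103 1 0 grep locus
root  104 1 0 loc_restart.ksh locus"

echo "running=$(count_services "$sample")"
```

In loc_restart.ksh itself the resulting count is compared with `[[ $count -eq 16 ]]`; any other value triggers the alert mail.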
#####################################################################
# Script /insight/local/scripts/runner.10.ksh
#####################################################################
#!/bin/ksh
#####################################################################
#
# Name: runner.10.ksh
#
# Reference: n/a
#
# Description: Runner for the Reader
# Processing only the Q10 queue file;
#
# Parameters: None
#
# Modification History:
#
# Date Name Description
# ------------------------------------------------
# 2002-10-01 Bob Chong Original
#
#####################################################################
set -v
set -x
typeset -ft rdrIsAvailable rdrStart
typeset -fu rdrIsAvailable rdrStart
# DMQ environment variable
Bus=11
Group=34
# Queue number
QueList="10"
#Queue 46 has been disabled due to no feed from LOCUS
#and the Q46 reader has also been disabled on LOCUS since 1998;
#QueList="10 21 22 31 32 34 41 46 51 52 70 81"
DmqJtmp=/dmqjtmp
RdrVaxFileDir=$DmqJtmp/rcp
RdrVaxFileArDir=$RdrVaxFileDir/done
DmqVaxHome=/dmqjtmp/dmqvax
RdrVaxTokenDir=$DmqVaxHome/token
RdrVaxTokenArDir=$DmqVaxHome/tokendone
RdrVaxTokenErrDir=$DmqVaxHome/tokenerror
Log=/dmqjtmp/archiveRunnerLog/runner.log
RefFile=/dmqjtmp/archiveRunnerLog/runner.ref
AppsDir=/usr/apps
BetaDir=$AppsDir/dmq/beta
aixsupport="lchen@livingstonintl.com bchong@livingstonintl.com dguo@livingstonintl.com tward@livingstonintl.com"
Tcl=./tcl
RdrTraceLevel=3
#RdrTraceLevel=4
msgLog(){
print `date` "$1" >> $Log
}
rdrStart(){
set -x
typeset q=$1 f=$2
typeset betaEnv=dmqbeta own=bbois
typeset workFile=$BetaDir/$Bus$Group${q}.DMQIN
msgLog "Processing $q now"
ln -sf $f $workFile
chmod 644 $f
su - $own >>/dev/null 2>&1 <<%
cd /usr/apps/ipg/ver001/srv/locus
. setenv.locus
cd /usr/apps/dmq/beta
export DMQ_FULL_AUDIT_$q=FILE
$Tcl $q $RdrTraceLevel &
%
return 0
}
GetNextFile(){
set -x
typeset t f
integer size=0
# get the first one 70.1.vax 70.2.vax 70.3.vax
for i
do
t=$RdrVaxTokenDir/$i
f=$RdrVaxFileDir/$i
[ -f $f ] || {
msgLog "Panic: Missing file for token $i ???"
mv $t $RdrVaxTokenErrDir
continue
}
[ -s $f ] || {
msgLog "Panic: Empty file for token $i ???"
mv $t $RdrVaxTokenErrDir
mv $f $RdrVaxFileArDir
continue
}
((size=$(ls -l $f | awk '{ print $5 }')))
msgLog "Processing $i size = ${size} ready "
sleep 4 #Added due to processing issue;
chmod 644 $f
mv $t $RdrVaxTokenArDir
mv $f $RdrVaxFileArDir
print $RdrVaxFileArDir/$i
return 0
done
return 1
}
timeRefFileCreate(){
set -x
typeset q=${1-??}
integer x=${2-5}
typeset ref=/usr/apps/dmq/beta/xtime.0$q
integer m=$(date +%m) d=$(date +%d) h=$(date +%H) n=$(date +%M)
if (( $n >= 5 ))
then
((n=n-x))
else
return 0
fi
typeset -Z2 m d h n
touch -t ${m}${d}${h}${n} $ref
print $ref
}
rdrIsActive(){
set -x
typeset q=${1-??}
typeset min=${2-5}
typeset dir=${3-$BetaDir}
typeset ref=$(timeRefFileCreate $q $min)
typeset rdLog=dmqlog.0$q
typeset list
# if dmqlog.0?? file is not updated in 5 mins, then stop
find $dir -type f -newer ${ref} -print | grep "dmqlog.0$q" > /dev/null 2>&1
}
rdrIsUp(){
set -x
typeset q=$1
# if current reader not found, then prepare to process the current queue
#ps -ef | grep "[0-9] $Tcl $q $RdrTraceLevel" > /dev/null 2>&1
ps -ef | grep "$Tcl $q $RdrTraceLevel" > /dev/null 2>&1
}
rdrIsAvailable(){
set -x
typeset q=$1
# is reader running
rdrIsUp $q || {
msgLog "Reader $q is not running"
return 0
}
msgLog "Reader $q is running"
# is reader working
rdrIsActive $q && {
msgLog "Reader $q is running and working"
return 1
}
msgLog "Reader $q is running but not working"
return 1
}
lnName(){
set -x
typeset lnk=$1 nameLong nameShort
[ -L $lnk ] || return 1
nameLong=$(ls -l $lnk | awk '{print $11}')
nameShort=${nameLong##*/}
print $nameShort
}
fileSize(){
set -x
typeset f i
integer size s
for i
do
f=$RdrVaxFileDir/$i
s=$(ls -l $f 2>/dev/null | awk '{ print $5 }')
size=size+s
done
print $size
}
tokenList(){
set -x
typeset q=$1
typeset pat=${q}*.vax
typeset list
# no token ( 0 ) means false ( 1 ); otherwise print the files
set -A list $(cd $RdrVaxTokenDir; ls $pat 2> /dev/null)
((${#list[*]})) && print ${list[*]}
}
tokenSetLast(){
set -x
typeset q=$1
typeset Last=$RdrVaxTokenDir/Last.$q # point to the last token file
typeset list name f
# no token file then goto next queue
set -A list $(tokenList $q)
((${#list[*]})) || {
msgLog "No token file for $q"
return 1
}
msgLog "Reader $q total file = ${#list[*]} total size = $(fileSize ${list[*]})"
# set the last token file
integer n=$((${#list[*]}-1))
name=${list[n]}
# same file then process next file
[ "$(lnName $Last)" = $name ] && {
msgLog "Reader $q no new file arrived "
return 0
}
# link the last file
f=$RdrVaxFileDir/$name
ln -sf $f $Last
msgLog "Reader $q received $name "
}
### processing each queue transaction
rdrProc(){
set -x
typeset q=$1
typeset Cur=$DmqVaxHome/Cur.$q # point to the current file
typeset Last list f
echo "\n == Handling the Queue in detail for Queue : $q @ `date` ==\n"
# set the last token file received
echo " # set the last token file received ...."
tokenSetLast $q || {
msgLog "Goto next queue"
return 0
}
# is reader available
echo " Verify whether reader is available ..."
rdrIsAvailable $q || {
msgLog "Reader $q is not available"
return 0
}
# reader is available: let log the end of work, set Cur, Last Ptr
echo "# reader is available: let log the end of work, set Cur, Last Ptr ..."
[ -L $Cur ] && {
Last=$DmqVaxHome/Last.$q
mv $Cur $Last
msgLog "Reader $q file = $(lnName $Last) done"
}
# check for more to-do
set -A list $(tokenList $q)
((${#list[*]})) || return 0
# get the next file
print "allnextfile=${list[*]}"
f=$(GetNextFile ${list[*]})
print "nextfile=$f"
[ "$f" ] || return 0
# set the current file
ln -sf $f $Cur
# start the reader
echo "\n===== Start the actual reader @ `date`====\n"
rdrStart $q $f
}
### check the environments and subsystems
tuxIsUp(){
set -x
# check the tuxedo instance mode
cd /usr/apps/ipg/ver001/srv/locus
. ./setenv.locus
# bchong 2003/07/15 ulog error 557
# tuxpsr.ksh > /dev/null 2>&1
}
infIsUp(){
set -x
typeset db=ipdb
typeset dbstate=On-Line
# check the database instance mode
cd /home/informix
. ./setenv.inf ipdb
currstate=`onstat -u | grep Informix | awk '{ print $8 }'`
[ $dbstate = $currstate ]
}
locsrvCnt(){
set -x
typeset locCnt0=10
# check the locsrv process
locCnt1=`ps -ef | grep locsrv | grep locus | wc -l`
(( $locCnt0 == $locCnt1 ))
}
fsUsedIs(){
set -x
typeset fs="$1"
typeset pct="$2"
# check filesystem /usr/apps usage
df $fs | awk '{ print $4 }' | tail -1 | egrep "$pct" > /dev/null 2>&1
}
### main program starts from here;
# check the control reference file
echo "check the control reference file ..."
[ -f $RefFile ] || {
set -x
msgLog "Error: no control file"
exit 1
}
# check the /usr/apps filesystem usage
echo "check the /usr/apps filesystem usage ...."
fsUsedIs $AppsDir "(9[0-9]|100)" && {
set -x
msgLog "Error: File System $AppsDir is full"
mail -s "$AppsDir File System Space Low!!!" $aixsupport </dev/null
exit 1
}
# check the /dmqjtmp filesystem usage
echo "check the /dmqjtmp filesystem usage ..."
fsUsedIs $DmqJtmp "(9[0-9]|100)" && {
set -x
msgLog "Error: File System $DmqJtmp is full"
mail -s "$DmqJtmp File System Space Low!!!" $aixsupport </dev/null
exit 1
}
# check locsrv server process
echo "check locsrv server process ...."
locsrvCnt || {
set -x
msgLog "Error: Locsrv server process is wrong"
mail -s "Locsrv Server process is wrong!!!" $aixsupport </dev/null
exit 1
}
# check informix is up
echo "# check informix is up ..."
infIsUp || {
set -x
msgLog "Error: Informix is not up"
mail -s "Informix Server is not up!!!" $aixsupport </dev/null
exit 1
}
# check tuxedo is up
echo " # check tuxedo is up ..."
tuxIsUp || {
set -x
msgLog "Error: Tuxedo is not up"
mail -s "Tuxedo Server is not up!!!" $aixsupport </dev/null
exit 1
}
[ -d $RdrVaxTokenDir ] || mkdir $RdrVaxTokenDir
[ -d $RdrVaxTokenArDir ] || mkdir $RdrVaxTokenArDir
[ -d $RdrVaxTokenErrDir ] || mkdir $RdrVaxTokenErrDir
msgLog "\n<--- RUNNER opening --->"
for i in $QueList
do
msgLog "Reader $i start ###"
echo "\n ====== Start to handle Queue : $i @ `date` ======\n"
rdrProc $i
msgLog "Reader $i finish ###"
done
msgLog "<--- RUNNER closing --->\n"
touch $RefFile
exit 0
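timeRefFileCreate in the runner above builds a `touch -t MMDDhhmm` reference timestamp and relies on `typeset -Z2` to zero-pad each field to two digits. The same padding can be sketched portably with printf (the `pad2` helper is illustrative):

```shell
# printf's %02d produces the same two-digit, zero-padded fields that
# typeset -Z2 m d h n gives the script under ksh.
pad2() { printf '%02d' "$1"; }

m=3 d=7 h=9 n=5
stamp="$(pad2 $m)$(pad2 $d)$(pad2 $h)$(pad2 $n)"
echo "$stamp"    # a touch -t argument of the form MMDDhhmm
```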
#####################################################################
# Script /insight/local/scripts/runner.all.ksh
#####################################################################
#!/bin/ksh
#####################################################################
#
# Name: runner.all.ksh
#
# Reference: n/a
#
# Description: Runner for the Reader
# Processing all queue files;
#
# Parameters: None
#
# Modification History:
#
# Date Name Description
# ------------------------------------------------
# 2002-10-01 Bob Chong Original
#
#####################################################################
set -v
set -x
date
typeset -ft rdrIsAvailable rdrStart
typeset -fu rdrIsAvailable rdrStart
# DMQ environment variable
Bus=11
Group=34
# Queue number
QueList="10 21 22 31 32 34 41 51 52 70 81"
#Queue 46 has been disabled due to no feed from LOCUS
#and the Q46 reader has also been disabled on LOCUS since 1998;
#QueList="10 21 22 31 32 34 41 46 51 52 70 81"
DmqJtmp=/dmqjtmp
RdrVaxFileDir=$DmqJtmp/rcp
RdrVaxFileArDir=$RdrVaxFileDir/done
DmqVaxHome=/dmqjtmp/dmqvax
RdrVaxTokenDir=$DmqVaxHome/token
RdrVaxTokenArDir=$DmqVaxHome/tokendone
RdrVaxTokenErrDir=$DmqVaxHome/tokenerror
Log=/dmqjtmp/archiveRunnerLog/runner.log
RefFile=/dmqjtmp/archiveRunnerLog/runner.ref
AppsDir=/usr/apps
BetaDir=$AppsDir/dmq/beta
aixsupport="lchen@livingstonintl.com bchong@livingstonintl.com dguo@livingstonintl.com tward@livingstonintl.com"
Tcl=./tcl
RdrTraceLevel=3
#RdrTraceLevel=4
msgLog(){
print `date` "$1" >> $Log
}
rdrStart(){
set -x
typeset q=$1 f=$2
typeset betaEnv=dmqbeta own=bbois
typeset workFile=$BetaDir/$Bus$Group${q}.DMQIN
msgLog "Processing $q now"
ln -sf $f $workFile
chmod 644 $f
su - $own >>/dev/null 2>&1 <<%
cd /usr/apps/ipg/ver001/srv/locus
. setenv.locus
cd /usr/apps/dmq/beta
export DMQ_FULL_AUDIT_$q=FILE
$Tcl $q $RdrTraceLevel &
%
return 0
}
GetNextFile(){
set -x
typeset t f
integer size=0
# get the first one 70.1.vax 70.2.vax 70.3.vax
for i
do
t=$RdrVaxTokenDir/$i
f=$RdrVaxFileDir/$i
[ -f $f ] || {
msgLog "Panic: Missing file for token $i ???"
mv $t $RdrVaxTokenErrDir
continue
}
[ -s $f ] || {
msgLog "Panic: Empty file for token $i ???"
mv $t $RdrVaxTokenErrDir
mv $f $RdrVaxFileArDir
continue
}
((size=$(ls -l $f | awk '{ print $5 }')))
msgLog "Processing $i size = ${size} ready "
sleep 4 #Added due to processing issue;
chmod 644 $f
mv $t $RdrVaxTokenArDir
mv $f $RdrVaxFileArDir
print $RdrVaxFileArDir/$i
return 0
done
return 1
}
timeRefFileCreate(){
set -x
typeset q=${1-??}
integer x=${2-5}
typeset ref=/usr/apps/dmq/beta/xtime.0$q
integer m=$(date +%m) d=$(date +%d) h=$(date +%H) n=$(date +%M)
if (( $n >= 5 ))
then
((n=n-x))
else
return 0
fi
typeset -Z2 m d h n
touch -t ${m}${d}${h}${n} $ref
print $ref
}
rdrIsActive(){
set -x
typeset q=${1-??}
typeset min=${2-5}
typeset dir=${3-$BetaDir}
typeset ref=$(timeRefFileCreate $q $min)
typeset rdLog=dmqlog.0$q
typeset list
# if dmqlog.0?? file is not updated in 5 mins, then stop
find $dir -type f -newer ${ref} -print | grep "dmqlog.0$q" > /dev/null 2>&1
}
rdrIsUp(){
set -x
typeset q=$1
# if current reader not found, then prepare to process the current queue
#ps -ef | grep "[0-9] $Tcl $q $RdrTraceLevel" > /dev/null 2>&1
ps -ef | grep "$Tcl $q $RdrTraceLevel" > /dev/null 2>&1
}
rdrIsAvailable(){
set -x
typeset q=$1
# is reader running
rdrIsUp $q || {
msgLog "Reader $q is not running"
return 0
}
msgLog "Reader $q is running"
# is reader working
rdrIsActive $q && {
msgLog "Reader $q is running and working"
return 1
}
msgLog "Reader $q is running but not working"
return 1
}
lnName(){
set -x
typeset lnk=$1 nameLong nameShort
[ -L $lnk ] || return 1
nameLong=$(ls -l $lnk | awk '{print $11}')
nameShort=${nameLong##*/}
print $nameShort
}
fileSize(){
set -x
typeset f i
integer size s
for i
do
f=$RdrVaxFileDir/$i
s=$(ls -l $f 2>/dev/null | awk '{ print $5 }')
size=size+s
done
print $size
}
tokenList(){
set -x
typeset q=$1
typeset pat=${q}*.vax
typeset list
# no token ( 0 ) means false ( 1 ); otherwise print the files
set -A list $(cd $RdrVaxTokenDir; ls $pat 2> /dev/null)
((${#list[*]})) && print ${list[*]}
}
tokenSetLast(){
set -x
typeset q=$1
typeset Last=$RdrVaxTokenDir/Last.$q # point to the last token file
typeset list name f
# no token file then goto next queue
echo "\n==== Check whether there is any token file for Queue $q..."
set -A list $(tokenList $q)
((${#list[*]})) || {
msgLog "No token file for $q"
return 1
}
msgLog "Reader $q total file = ${#list[*]} total size = $(fileSize ${list[*]})"
# set the last token file
integer n=$((${#list[*]}-1))
name=${list[n]}
# same file then process next file
[ "$(lnName $Last)" = $name ] && {
msgLog "Reader $q no new file arrived "
return 0
}
# link the last file
f=$RdrVaxFileDir/$name
ln -sf $f $Last
msgLog "Reader $q received $name "
}
### processing each queue transaction
rdrProc(){
set -x
typeset q=$1
typeset Cur=$DmqVaxHome/Cur.$q # point to the current file
typeset Last list f
echo "\n == Handling the Queue in detail for Queue : $q @ `date` ==\n"
# set the last token file received
echo "\n==== # set the last token file received ...."
tokenSetLast $q || {
msgLog "Goto next queue"
return 0
}
# is reader available
echo "\n==== Verify whether reader is available ..."
rdrIsAvailable $q || {
msgLog "Reader $q is not available"
return 0
}
# reader is available: let log the end of work, set Cur, Last Ptr
echo "\n======# reader is available: let log the end of work, set Cur, Last Ptr ..."
[ -L $Cur ] && {
Last=$DmqVaxHome/Last.$q
mv $Cur $Last
msgLog "Reader $q file = $(lnName $Last) done"
}
# check for more to-do
echo "\n==== Get token file for Queue $q ...."
set -A list $(tokenList $q)
((${#list[*]})) || return 0
# get the next file
echo "\n==== Get first available file in Queue $q to process ...."
print "allnextfile=${list[*]}"
f=$(GetNextFile ${list[*]})
print "nextfile=$f"
[ "$f" ] || return 0
# set the current file
ln -sf $f $Cur
# start the reader
echo "\n===== Start the actual reader for $f @ `date`====\n"
rdrStart $q $f
}
### check the environments and subsystems
tuxIsUp(){
set -x
# check the tuxedo instance mode
cd /usr/apps/ipg/ver001/srv/locus
. ./setenv.locus
# bchong 2003/07/15 ulog error 557
# tuxpsr.ksh > /dev/null 2>&1
}
infIsUp(){
set -x
typeset db=ipdb
typeset dbstate=On-Line
# check the database instance mode
cd /home/informix
. ./setenv.inf ipdb
currstate=`onstat -u | grep Informix | awk '{ print $8 }'`
[ $dbstate = $currstate ]
}
locsrvCnt(){
set -x
typeset locCnt0=10
# check the locsrv process
locCnt1=`ps -ef | grep locsrv | grep locus | wc -l`
(( $locCnt0 == $locCnt1 ))
}
fsUsedIs(){
set -x
typeset fs="$1"
typeset pct="$2"
# check filesystem /usr/apps usage
df $fs | awk '{ print $4 }' | tail -1 | egrep "$pct" > /dev/null 2>&1
}
### main program starts from here;
# check the control reference file
date
echo "+++++ Control file check ....."
[ -f $RefFile ] || {
set -x
msgLog "Error: no control file"
exit 1
}
# check the /usr/apps filesystem usage
date
echo "+++++ /usr/apps Disk Space check ....."
fsUsedIs $AppsDir "(9[0-9]|100)" && {
set -x
msgLog "Error: File System $AppsDir is full"
mail -s "$AppsDir File System Space Low!!!" $aixsupport </dev/null
exit 1
}
# check the /dmqjtmp filesystem usage
date
echo "+++++ /dmqjtmp Disk Space check ....."
fsUsedIs $DmqJtmp "(9[0-9]|100)" && {
set -x
msgLog "Error: File System $DmqJtmp is full"
mail -s "$DmqJtmp File System Space Low!!!" $aixsupport </dev/null
exit 1
}
# check locsrv server process
date
echo "+++++ Locus Service check ...."
locsrvCnt || {
set -x
msgLog "Error: Locsrv server process is wrong"
mail -s "Locsrv Server process is wrong!!!" $aixsupport </dev/null
exit 1
}
# check informix is up
date
echo "+++++ Informix check ...."
infIsUp || {
set -x
msgLog "Error: Informix is not up"
mail -s "Informix Server is not up!!!" $aixsupport </dev/null
exit 1
}
# check tuxedo is up
date
echo "+++++ Tuxedo check ..."
tuxIsUp || {
set -x
msgLog "Error: Tuxedo is not up"
mail -s "Tuxedo Server is not up!!!" $aixsupport </dev/null
exit 1
}
date
echo "+++++ Token Directory check ..."
[ -d $RdrVaxTokenDir ] || mkdir $RdrVaxTokenDir
[ -d $RdrVaxTokenArDir ] || mkdir $RdrVaxTokenArDir
[ -d $RdrVaxTokenErrDir ] || mkdir $RdrVaxTokenErrDir
echo "\n*** Start to process each queue @ `date` ****\n"
msgLog "\n<--- RUNNER opening --->"
for i in $QueList
do
msgLog "Reader $i start ###"
echo "\n ====== Start to handle Queue : $i @ `date` ======\n"
rdrProc $i
msgLog "Reader $i finish ###\n\n"
done
msgLog "<--- RUNNER closing --->\n"
touch $RefFile
exit 0
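The fsUsedIs check used by all three runners extracts the %Used column from `df` and matches it against the egrep pattern `(9[0-9]|100)`, so 90 through 100 percent counts as full. The threshold test in isolation (the `is_full` name and sample values are illustrative; `grep -E` stands in for the script's egrep):

```shell
# Usage of 90-99% or exactly 100% matches the pattern; lower values
# do not, so the runner only alerts near capacity.
is_full() { echo "$1" | grep -E "(9[0-9]|100)" > /dev/null 2>&1; }

is_full "95%" && echo "alert"
is_full "42%" || echo "ok"
```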
#####################################################################
# Script /insight/local/scripts/runner.71.ksh
#####################################################################
#!/bin/ksh
#####################################################################
#
# Name: runner.71.ksh
#
# Reference: n/a
#
# Description: Runner for the Reader
# Processing only the Q71 queue file;
#
# Parameters: None
#
# Modification History:
#
# Date Name Description
# ------------------------------------------------
# 2002-10-01 Bob Chong Original
#
#####################################################################
set -v
set -x
typeset -ft rdrIsAvailable rdrStart
typeset -fu rdrIsAvailable rdrStart
# DMQ environment variable
Bus=11
Group=34
# Queue number
#QueList="10 21 22 31 32 34 41 46 51 52 70 81"
QueList="71"
DmqJtmp=/dmqjtmp
RdrVaxFileDir=$DmqJtmp/rcp
RdrVaxFileArDir=$RdrVaxFileDir/done
DmqVaxHome=/dmqjtmp/dmqvax
RdrVaxTokenDir=$DmqVaxHome/token
RdrVaxTokenArDir=$DmqVaxHome/tokendone
RdrVaxTokenErrDir=$DmqVaxHome/tokenerror
Log=/dmqjtmp/archiveRunnerLog/runner.log
RefFile=/dmqjtmp/archiveRunnerLog/runner.ref
AppsDir=/usr/apps
BetaDir=$AppsDir/dmq/beta
aixsupport="lchen@livingstonintl.com bchong@livingstonintl.com dguo@livingstonintl.com tward@livingstonintl.com"
Tcl=./tcl
RdrTraceLevel=3
#RdrTraceLevel=4
msgLog(){
print `date` "$1" >> $Log
}
rdrStart(){
set -x
typeset q=$1 f=$2
typeset betaEnv=dmqbeta own=bbois
typeset workFile=$BetaDir/$Bus$Group${q}.DMQIN
msgLog "Processing $q now"
ln -sf $f $workFile
chmod 644 $f
su - $own >>/dev/null 2>&1 <<%
cd /usr/apps/ipg/ver001/srv/locus
. setenv.locus
cd /usr/apps/dmq/beta
export DMQ_FULL_AUDIT_$q=FILE
$Tcl $q $RdrTraceLevel &
%
return 0
}
GetNextFile(){
set -x
typeset t f
integer size=0
# get the first one 70.1.vax 70.2.vax 70.3.vax
for i
do
t=$RdrVaxTokenDir/$i
f=$RdrVaxFileDir/$i
[ -f $f ] || {
msgLog "Panic: Missing file for token $i ???"
mv $t $RdrVaxTokenErrDir
continue
}
[ -s $f ] || {
msgLog "Panic: Empty file for token $i ???"
mv $t $RdrVaxTokenErrDir
mv $f $RdrVaxFileArDir
continue
}
((size=$(ls -l $f | awk '{ print $5 }')))
msgLog "Processing $i size = ${size} ready "
sleep 2 #Added due to processing issue;
chmod 644 $f
mv $t $RdrVaxTokenArDir
mv $f $RdrVaxFileArDir
print $RdrVaxFileArDir/$i
return 0
done
return 1
}
timeRefFileCreate(){
set -x
typeset q=${1-??}
integer x=${2-5}
typeset ref=/usr/apps/dmq/beta/xtime.0$q
integer m=$(date +%m) d=$(date +%d) h=$(date +%H) n=$(date +%M)
if (( $n >= 5 ))
then
((n=n-x))
else
return 0
fi
typeset -Z2 m d h n
touch -t ${m}${d}${h}${n} $ref
print $ref
}
rdrIsActive(){
set -x
typeset q=${1-??}
typeset min=${2-5}
typeset dir=${3-$BetaDir}
typeset ref=$(timeRefFileCreate $q $min)
typeset rdLog=dmqlog.0$q
typeset list
# if dmqlog.0?? file is not updated in 5 mins, then stop
find $dir -type f -newer ${ref} -print | grep "dmqlog.0$q" > /dev/null 2>&1
}
rdrIsUp(){
set -x
typeset q=$1
# if current reader not found, then prepare to process the current queue
ps -ef | grep "[0-9] $Tcl $q $RdrTraceLevel" > /dev/null 2>&1
}
rdrIsAvailable(){
set -x
typeset q=$1
# is reader running
rdrIsUp $q || {
msgLog "Reader $q is not running"
return 0
}
msgLog "Reader $q is running"
# is reader working
rdrIsActive $q && {
msgLog "Reader $q is running and working"
return 1
}
msgLog "Reader $q is running but not working"
return 1
}
lnName(){
set -x
typeset lnk=$1 nameLong nameShort
[ -L $lnk ] || return 1
nameLong=$(ls -l $lnk | awk '{print $11}')
nameShort=${nameLong##*/}
print $nameShort
}
fileSize(){
set -x
typeset f i
integer size s
for i
do
f=$RdrVaxFileDir/$i
s=$(ls -l $f 2>/dev/null | awk '{ print $5 }')
size=size+s
done
print $size
}
tokenList(){
set -x
typeset q=$1
typeset pat=${q}*.vax
typeset list
# no token ( 0 ) means false ( 1 ); otherwise print the files
set -A list $(cd $RdrVaxTokenDir; ls $pat 2> /dev/null)
((${#list[*]})) && print ${list[*]}
}
tokenSetLast(){
set -x
typeset q=$1
typeset Last=$RdrVaxTokenDir/Last.$q # point to the last token file
typeset list name f
# no token file then goto next queue
set -A list $(tokenList $q)
((${#list[*]})) || {
msgLog "No token file for $q"
return 1
}
msgLog "Reader $q total file = ${#list[*]} total size = $(fileSize ${list[*]})"
# set the last token file
integer n=$((${#list[*]}-1))
name=${list[n]}
# same file then process next file
[ "$(lnName $Last)" = $name ] && {
msgLog "Reader $q no new file arrived "
return 0
}
# link the last file
f=$RdrVaxFileDir/$name
ln -sf $f $Last
msgLog "Reader $q received $name "
}
### processing each queue transaction
rdrProc(){
set -x
typeset q=$1
typeset Cur=$DmqVaxHome/Cur.$q # point to the current file
typeset Last list f
# set the last token file received
tokenSetLast $q || {
msgLog "Goto next queue"
return 0
}
# is reader available
rdrIsAvailable $q || {
msgLog "Reader $q is not available"
return 0
}
# reader is available: let log the end of work, set Cur, Last Ptr
[ -L $Cur ] && {
Last=$DmqVaxHome/Last.$q
mv $Cur $Last
msgLog "Reader $q file = $(lnName $Last) done"
}
# check for more to-do
set -A list $(tokenList $q)
((${#list[*]})) || return 0
# get the next file
print "allnextfile=${list[*]}"
f=$(GetNextFile ${list[*]})
print "nextfile=$f"
[ "$f" ] || return 0
# set the current file
ln -sf $f $Cur
# start the reader
rdrStart $q $f
}
### check the environments and subsystems
tuxIsUp(){
set -x
# check the tuxedo instance mode
cd /usr/apps/ipg/ver001/srv/locus
. ./setenv.locus
# bchong 2003/07/15 ulog error 557
# tuxpsr.ksh > /dev/null 2>&1
}
infIsUp(){
set -x
typeset db=ipdb
typeset dbstate=On-Line
# check the database instance mode
cd /home/informix
. ./setenv.inf ipdb
currstate=`onstat -u | grep Informix | awk '{ print $8 }'`
[ $dbstate = $currstate ]
}
locsrvCnt(){
set -x
typeset locCnt0=10
# check the locsrv process
locCnt1=`ps -ef | grep locsrv | grep locus | wc -l`
(( $locCnt0 == $locCnt1 ))
}
fsUsedIs(){
set -x
typeset fs="$1"
typeset pct="$2"
# check filesystem /usr/apps usage
df $fs | awk '{ print $4 }' | tail -1 | egrep "$pct" > /dev/null 2>&1
}
### main
# check the control reference file
[ -f $RefFile ] || {
set -x
msgLog "Error: no control file"
exit 1
}
# check the /usr/apps filesystem usage
fsUsedIs $AppsDir "(9[0-9]|100)" && {
set -x
msgLog "Error: File System $AppsDir is full"
mail -s "$AppsDir File System Space Low!!!" $aixsupport </dev/null
exit 1
}
# check the /dmqjtmp filesystem usage
fsUsedIs $DmqJtmp "(9[0-9]|100)" && {
set -x
msgLog "Error: File System $DmqJtmp is full"
mail -s "$DmqJtmp File System Space Low!!!" $aixsupport </dev/null
exit 1
}
# check locsrv server process
locsrvCnt || {
set -x
msgLog "Error: Locsrv server process is wrong"
mail -s "Locsrv Server process is wrong!!!" $aixsupport </dev/null
exit 1
}
# check informix is up
infIsUp || {
set -x
msgLog "Error: Informix is not up"
mail -s "Informix Server is not up!!!" $aixsupport </dev/null
exit 1
}
# check tuxedo is up
tuxIsUp || {
set -x
msgLog "Error: Tuxedo is not up"
mail -s "Tuxedo Server is not up!!!" $aixsupport </dev/null
exit 1
}
[ -d $RdrVaxTokenDir ] || mkdir $RdrVaxTokenDir
[ -d $RdrVaxTokenArDir ] || mkdir $RdrVaxTokenArDir
[ -d $RdrVaxTokenErrDir ] || mkdir $RdrVaxTokenErrDir
msgLog "<--- RUNNER opening --->"
for i in $QueList
do
msgLog "Reader $i start ###"
rdrProc $i
msgLog "Reader $i finish ###"
done
msgLog "<--- RUNNER closing --->\n"
touch $RefFile
exit 0
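Note a subtle difference in rdrIsUp between the runners: runner.71.ksh greps for `"[0-9] $Tcl $q $RdrTraceLevel"` while the other two use the plain pattern (the bracketed form is left commented out there). The leading character class stops grep from matching its own entry in the ps output, because the grep process's command line contains the literal text `[0-9]` rather than an actual digit. A standalone sketch with made-up ps lines:

```shell
# The pattern requires a digit (the end of the ps TIME column)
# followed by the reader command, so a real reader line matches but
# the grep process's own line does not.
pattern="[0-9] ./tcl 71 3"
reader="bbois 1234 1 0 10:00:00 - 0:01 ./tcl 71 3"
grepline="root 9999 1 0 10:00:01 - 0:00 grep [0-9] ./tcl 71 3"

echo "$reader"   | grep "$pattern" > /dev/null && echo "reader matched"
echo "$grepline" | grep "$pattern" > /dev/null || echo "grep entry skipped"
```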
#####################################################################
# Script /insight/local/scripts/iccdataupload/StartInsightUpload.ksh
#####################################################################
#!/usr/bin/ksh
echo
date
PATH=/usr/java6/jre/bin:$PATH
export PATH
echo "Change to working directory ...... \n"
cd /insight/local/scripts/iccdataupload
echo "Start ICCDataUpload ....\n"
#java -version
CLASSPATH=$CLASSPATH:.:/insight/local/scripts/iccdataupload/lib/ifxjdbc.jar:
export CLASSPATH
#nohup java ICCDataUpload >> ICCDataUpload.log &
java ICCDataUpload > unhandled_excps.out
#sleep 5
#PID=`ps -ef | grep "java ICCDataUpload" | grep -v "grep" | awk '{print $2}'`
#if [ $? -eq 1 ]
#then
# echo "\n ** ERROR: the ICCDataUpload is NOT been started ! **\n"
#else
# echo "ICCDataUpload has been started successfully ....\n"
# mail -s "ICCDataUpload has been completed successfully" lchen@livingstonintl.com < /dev/null
#fi
cd /insight/local/scripts/iccdataupload/archive
alias rm='rm'
find . -mtime +3 -type d -exec rm -r {} \;
find /insight/local/scripts/iccdataupload/ -name "*log*.txt" -mtime +3 -exec rm {} \;
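The cleanup commands above rely on `find -mtime +3` to remove archive directories and log files untouched for more than three days. The same pruning can be demonstrated in a scratch directory (temporary paths only, not the production tree):

```shell
# old.txt is stamped far in the past so it matches -mtime +3;
# new.txt was just created, so only old.txt is removed.
dir=$(mktemp -d)
touch "$dir/new.txt"
touch -t 201201010000 "$dir/old.txt"
find "$dir" -name "*.txt" -mtime +3 -exec rm {} \;
ls "$dir"
```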
#####################################################################
# Script /insight/local/scripts/ICCSetExpiryDates/StartInsightSetExpiryDates.ksh
#####################################################################
#!/usr/bin/ksh
echo
date
PATH=/usr/java6/jre/bin:$PATH
export PATH
echo "Change to working directory ...... \n"
cd /insight/local/scripts/ICCSetExpiryDates
echo "Start ICCSetExpiryDates ....\n"
java -version
CLASSPATH=$CLASSPATH:.:/insight/local/scripts/ICCSetExpiryDates/lib/ifxjdbc.jar:
export CLASSPATH
#nohup java ICCSetExpiryDates >> ICCSetExpiryDates.log &
java ICCSetExpiryDates > unhandled_excps.out
#sleep 5
#PID=`ps -ef | grep "java ICCSetExpiryDates" | grep -v "grep" | awk '{print $2}'`
#if [ $? -eq 1 ]
#then
# echo "\n ** ERROR: the ICCSetExpiryDates is NOT been started ! **\n"
#else
# echo "ICCSetExpiryDates has been started successfully ....\n"
# mail -s "ICCSetExpiryDates has been completed successfully" lchen@livingstonintl.com < /dev/null
#fi
#Clean the old log file;
find /insight/local/scripts/ICCSetExpiryDates/ -name "*log*.txt" -mtime +3 -exec rm {} \;
#####################################################################
# Script /insight/local/scripts/ICCBillingUpload/StartInsightBillingUpload.ksh
#####################################################################
#!/usr/bin/ksh
echo
date
PATH=/usr/java6/jre/bin:$PATH
export PATH
echo "Change to working directory ...... \n"
cd /insight/local/scripts/ICCBillingUpload
echo "Start ICCBillingUpload ....\n"
#Prepare ICCBillingUpload environment
unhandle_out=/insight/local/scripts/ICCBillingUpload/unhandled_excps.out
dataFile=/dmqjtmp/rcp/*.recv
process=`ps -ef | grep -v grep | grep 'java ICCBillingUpload' | wc -l`
[ $process -ne 0 ] && exit 0
# Note: the glob may expand to several names, which breaks [ -f ... ];
# let ls test whether any .recv file exists instead.
if ls $dataFile > /dev/null 2>&1; then
mv /dmqjtmp/rcp/*.recv /dmqjtmp/rcp/openItem
else
echo "No ICCBillingUpload Data Files"
mail -s "No ICCBillingUpload Data Files" lchen@livingstonintl.com < /dev/null
exit 0
fi
#Data Files processing start
rm -f $unhandle_out
#java -version
CLASSPATH=$CLASSPATH:.:/insight/local/scripts/ICCBillingUpload/lib/ifxjdbc.jar:/insight/local/scripts/ICCBillingUpload/lib/ifxlang.jar:/insight/local/scripts/ICCBillingUpload/lib/commons-lang-2.1.jar:
export CLASSPATH
#nohup java ICCBillingUpload >> ICCBillingUpload.log &
java ICCBillingUpload >> unhandled_excps.out
#sleep 5
#PID=`ps -ef | grep "java ICCBillingUpload" | grep -v "grep" | awk '{print $2}'`
#if [ $? -eq 1 ]
#then
# echo "\n ** ERROR: the ICCBillingUpload is NOT been started ! **\n"
#else
# echo "ICCBillingUpload has been started successfully ....\n"
# mail -s "ICCBillingUpload has been completed successfully" lchen@livingstonintl.com < /dev/null
#fi
#cd /dmqjtmp/rcp/done
#alias rm='rm'
#find . -mtime +3 -type d -exec rm -r {} \;
#find /dmqjtmp/rcp/done -name "*log*.txt" -mtime +3 -exec rm {} \;
#####################################################################
# Script /insight/local/scripts/emailDog.ksh
#####################################################################
#!/bin/ksh
###############################################################################
#
# Name: emailDog.ksh
#
# Reference: n/a
#
# Description: email IFX01 Morning Readiness Announcement
#
# Parameters: None
#
# Modification History:
#
# Date Name Description
# --------------------------------------------------------
# 2002-10-22 Bob Chong Original
# 2006-07-25 Denny Guo Modified
#
###############################################################################
set -v
set -x
# log and reference files
msgLog=/dmqjtmp/archiveEdogLog/emailDog.log
reFile=/dmqjtmp/archiveEdogLog/emailDog.ref
# email user list
#set -A AlertList bchong@livingstonintl.com \
set -A AlertList lchen@livingstonintl.com \
bchong@livingstonintl.com \
tward@livingstonintl.com \
dguo@livingstonintl.com \
computerops@livingstonintl.com
msgLog(){
set -x
# print `date` "$1" >> $msgLog
print `date` "\n$1\n" >> $msgLog
}
msgAction(){
set -x
msg="URGENT: please call the IP Unix administrator immediately!"
print "\n$msg\n"
}
msgAlert(){
set -x
#msgAction | mail -s "IFX01 Subsystem Error Message" ${AlertList[*]}
#mail -s "IFX01 Subsystem Error Message" ${AlertList[*]} < $msgLog
mail -s "Please call Unix Administrator ASAP!" ${AlertList[*]} < $msgLog
}
msgMorning(){
set -x
msg="
IP Server is operating normally:\n
1. Unix is running\n
2. Tuxedo is running\n
3. Informix is running\n
4. Qmaster is running\n
5. ILDT Loader is running\n
Thanks.\n"
print "$msg\n"
}
### check the environments and subsystems
qmeIsUp(){
set -x
integer qmeCnt0=4
qmeCnt1=`ps -ef|grep qme|grep -v 'grep'|wc -l`
(( $qmeCnt0 == $qmeCnt1 ))
}
tuxIsUp(){
set -x
# check the tuxedo instance mode
cd /usr/apps/ipg/ver001/srv/locus
. ./setenv.locus
# bchong 2003/07/24 ERROR ULOG 577
#tuxpsr.email.ksh > /dev/null 2>&1
}
infIsUp(){
set -x
typeset dbstate=On-Line
# check the database instance mode
cd /login/infown
. ./setenv.inf ipdb
currstate=`onstat -u | grep Informix | awk '{ print $8 }'`
[ $dbstate = $currstate ]
}
locsrvCnt(){
set -x
integer locCnt0=10
# check the locsrv process
locCnt1=`ps -ef | grep locsrv | grep locus | wc -l`
(( $locCnt0 == $locCnt1 ))
}
fsUsedIs(){
set -x
typeset fs="$1"
typeset pct="$2"
# check filesystem /usr/apps usage
df $fs | awk '{ print $4 }' | tail -1 | egrep "$pct" > /dev/null 2>&1
}
### main
# check the control reference file
[ -f $reFile ] || {
set -x
msgLog "Error: no control file"
msgAlert
exit 1
}
# check the /usr/apps filesystem usage
AppsDir=/usr/apps
fsUsedIs $AppsDir "(9[0-9]|100)" && {
set -x
msgLog "Error: File System $AppsDir is full"
msgAlert
exit 1
}
# check locsrv server process
locsrvCnt || {
set -x
msgLog "Error: Locsrv server process is wrong"
msgAlert
exit 1
}
# check informix is up
infIsUp || {
set -x
msgLog "Error: Informix is not up"
msgAlert
exit 1
}
# check tuxedo is up
tuxIsUp || {
set -x
msgLog "Error: Tuxedo is not up"
msgAlert
exit 1
}
#qmeIsUp || {
# set -x
# msgLog "Error: Qmaster queue manager is not up"
# msgAlert
# exit 1
#}
msgLog "IP all subsystems are up and running"
msgMorning | mail -s "Production IP Status" ${AlertList[*]}
touch $reFile
exit 0
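The fsUsedIs function above treats a filesystem as full when the %Used column printed by df matches the regex (9[0-9]|100), i.e. 90% or more. The same threshold can be checked numerically; a minimal sketch with a hypothetical function name and canned values in place of live df output:

```shell
# Hypothetical numeric variant of the fsUsedIs threshold test.
# fs_used_at_least takes a df-style percentage string (e.g. "91%")
# and a numeric threshold; it succeeds when usage meets or exceeds it.
fs_used_at_least() {
    pct=$1                  # e.g. "91%" as df prints it
    threshold=$2            # e.g. 90
    num=${pct%\%}           # strip the trailing percent sign
    [ "$num" -ge "$threshold" ]
}

# canned values instead of parsing live df output
fs_used_at_least "91%" 90 && echo "alarm" || echo "ok"   # prints "alarm"
```

With a numeric comparison, lowering the alert threshold is a one-number change rather than a new regex.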
#####################################################################
# Script /insight/local/scripts/alertDog.ksh
#####################################################################
#!/bin/ksh
##############################################################################
#
# Name: alertDog.ksh
#
# Reference: n/a
#
# Description: monitor the system error report message
#
# Parameters: None
#
# Modification History:
#
# Date Name Description
# -------------------------------------------------------
# 2002-10-22 Bob Chong Original
# 2006-10-05 Denny Guo Modified
#
##############################################################################
set -v
set -x
# log and reference files
msgLog=/dmqjtmp/archiveAdogLog/alertDog.log
errRpt=/dmqjtmp/archiveAdogLog/syserr.rpt
reFile=/dmqjtmp/archiveAdogLog/alertDog.ref
# email user list
#set -A AlertList aixsupport@livingstonintl.com \
# computerops@livingstonintl.com
set -A AlertList lchen@livingstonintl.com \
bchong@livingstonintl.com \
dguo@livingstonintl.com \
tward@livingstonintl.com \
computerops@livingstonintl.com
msgLog(){
set -x
print `date` "$1" >> $msgLog
}
msgAction(){
set -x
# msg="URGENT: please call the IP Unix administrator immediately!"
# print "\n$msg\n"
echo "URGENT: please call the IP Unix administrator immediately!" > $errRpt
errpt -a >> $errRpt
}
msgAlert(){
set -x
#msgAction | mail -s "IFX01 System Error Message" ${AlertList[*]}
mail -s "System Error Reported on IFX01!" ${AlertList[*]} < $errRpt
}
### check the system error message
anyErrpt() {
set -x
typeset -i errptCnt0=0
errptCnt1=`errpt | wc -l`
(( $errptCnt0 == $errptCnt1 ))
}
### main
# check the control reference file
[ -f $reFile ] || {
set -x
msgLog "Error: no control file"
msgAlert
exit 1
}
anyErrpt || {
set -x
msgLog "Error: ** System Error Reported! **"
msgAction && msgAlert
exit 1
}
msgLog "No Errpt Error Message"
touch $reFile
exit 0
#####################################################################
# Script /insight/local/scripts/watchDog.ksh
#####################################################################
#!/bin/ksh
##############################################################################
#
# Name: watchDog.ksh
#
# Reference: n/a
#
# Description: monitor the INSIGHT subsystems
#
# Parameters: None
#
# Modification History:
#
# Date Name Description
# -------------------------------------------------------
# 2002-10-22 Bob Chong Original
# 2006-07-26 Denny Guo Modified
#
##############################################################################
set -v
set -x
# log and reference files
msgLog=/dmqjtmp/archiveWdogLog/watchDog.log
reFile=/dmqjtmp/archiveWdogLog/watchDog.ref
# email user list
#set -A AlertList bchong@livingstonintl.com \
#set -A AlertList aixsupport@livingstonintl.com \
# computerops@livingstonintl.com
set -A AlertList lchen@livingstonintl.com \
bchong@livingstonintl.com \
dguo@livingstonintl.com \
tward@livingstonintl.com \
computerops@livingstonintl.com
msgLog(){
set -x
print `date` "\n$1\n" >> $msgLog
}
msgAction(){
set -x
msg="URGENT: please call the IP Unix administrator immediately!"
print "\n$msg\n"
}
msgAlert(){
set -x
#msgAction | mail -s "IFX01 Subsystem Error Message" ${AlertList[*]}
#By DGuo 2006-07-26
mail -s "IFX01 Error, Please Call Unix Administrator ASAP!" ${AlertList[*]} < $msgLog
}
### check the environments and subsystems
#No need to check qme, the VPOM has been retired on 2011/04/06
qmeIsUp(){
set -x
typeset -i qmeCnt0=4
ps -ef|grep qme|grep -v 'grep'
qmeCnt1=`ps -ef|grep qme|grep -v 'grep'|wc -l`
(( $qmeCnt1 >= $qmeCnt0 ))
}
tuxIsUp(){
set -x
# check the tuxedo instance mode
cd /usr/apps/ipg/ver001/srv/locus
. ./setenv.locus
tuxpsr.watch.ksh > /dev/null 2>&1
}
infIsUp(){
set -x
typeset db=ipdb
typeset dbstate=On-Line
# check the database instance mode
cd /home/informix
. ./setenv.inf ipdb
currstate=`onstat -u | grep Informix | awk '{ print $8 }'`
[ $dbstate = $currstate ]
}
locsrvCnt(){
set -x
typeset -i locCnt0=10
# check the locsrv process
locCnt1=`ps -ef | grep locsrv | grep locus | wc -l`
(( $locCnt0 == $locCnt1 ))
}
fsUsedIs(){
set -x
typeset fs="$1"
typeset pct="$2"
# check filesystem /usr/apps & /dmqjtmp usage
df $fs | awk '{ print $4 }' | tail -1 | egrep "$pct" > /dev/null 2>&1
}
### main
# check the control reference file
[ -f $reFile ] || {
set -x
msgLog "Error: no control file"
msgAlert
exit 1
}
# check the /usr/apps filesystem usage
AppsDir=/usr/apps
fsUsedIs $AppsDir "(9[0-9]|100)" && {
set -x
msgLog "Error: File System $AppsDir is full"
msgAlert
exit 1
}
# check the /dmqjtmp filesystem usage
AppsDir=/dmqjtmp
fsUsedIs $AppsDir "(9[0-9]|100)" && {
set -x
msgLog "Error: File System $AppsDir is full"
msgAlert
exit 1
}
# check locsrv server process
locsrvCnt || {
set -x
msgLog "Error: Locsrv server process is wrong"
msgAlert
exit 1
}
# check informix is up
infIsUp || {
set -x
msgLog "Error: Informix is not up"
msgAlert
exit 1
}
# check tuxedo is up
tuxIsUp || {
set -x
msgLog "Error: Tuxedo is not up"
msgAlert
exit 1
}
#No need to check qme, the VPOM has been retired on 2011/04/06
#qmeIsUp || {
# set -x
# msgLog "Error: Qmaster queue manager is not up"
# msgAlert
# exit 1
#}
msgLog "IP all subsystems are up and running"
touch $reFile
exit 0
#####################################################################
# Script /insight/local/scripts/tuxLogClean.ksh
#####################################################################
#!/bin/ksh
###############################################################################
#
# Name: tuxLogClean.ksh
#
# Reference: n/a
#
# Description: backup locus tuxedo instance ULOG files and archive
#
# Parameters: None
#
# Modification History:
#
# Date Name Description
# -----------------------------------------------------------
# 2002-10-22 Bob Chong Original
# 2006-08-01 Denny Guo Modified
#
################################################################################
set -v
set -x
TuxDir=/usr/apps/ipg/ver001/srv
LocusInstance=$TuxDir/locus
clean(){
set -x
integer n=$2
dir=$1
ar=ULOGS
Logs=ULOG.*
cd $dir
[ -L $ar ] || exit 1
set `ls -t $Logs`
shift
[ $1 ] && mv $* $ar
cd $ar
set `ls -t $Logs`
until (((n-=1)<0))
do
[ $1 ] && shift
done
rm -f $*
compress $Logs >/dev/null 2>&1
}
clean $LocusInstance 3
#
InsightDir=/usr/apps/ipg/ver001/srv/insight
BdsDir=/usr/apps/ipg/ver001/srv/bds/pgm/ip_0p
find $InsightDir -name "ULOG.*" -type f -mtime +4 -exec /usr/bin/rm -f {} \;
find $BdsDir -name "ULOG.*" -type f -mtime +4 -exec /usr/bin/rm -f {} \;
exit 0
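The clean() function above relies on an idiom worth spelling out: "set `ls -t ...`" loads the file names newest-first into the positional parameters, and "shift" drops the newest so only older files are moved or removed. A simplified, self-contained sketch of that idiom using scratch files (not the real ULOG layout):

```shell
# Sketch of the keep-newest idiom from clean(): load file names into the
# positional parameters newest-first, shift the newest out, then act on
# whatever remains.
keep_newest_one() {
    dir=$1
    cd "$dir" || return 1
    set -- $(ls -t)         # positional parameters, newest first
    shift                   # protect the newest file
    if [ $# -gt 0 ]; then
        rm -f "$@"
    fi
}

d=$(mktemp -d)
touch "$d/ULOG.old"
sleep 1                     # guarantee distinct modification times
touch "$d/ULOG.new"
keep_newest_one "$d"
ls "$d"                     # only ULOG.new remains
```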
#####################################################################
# Script /insight/local/scripts/oprLogClean.ksh
#####################################################################
#!/bin/ksh
#########################################################################
#
# Name: oprLogClean.ksh
#
# Reference: n/a
#
# Description: archive the data file
# archive the token file
# archive the token error file
# archive the runner log file
# archive the ipDog log file
#
# Parameters: None
#
# Modification History:
#
# Date Name Description
# --------------------------------------------------------
# 2002-10-22 Bob Chong Original
#
###########################################################################
set -v
set -x
BIN=/insight/local/scripts
ARN=$BIN/arn.ksh
RdrVaxFileArDir=/dmqjtmp/rcp/done
RdrVaxTokenArDir=/dmqjtmp/dmqvax/tokendone
RdrVaxTokenErrDir=/dmqjtmp/dmqvax/tokenerror
RunnerLog=/dmqjtmp/archiveRunnerLog/runner.log
RunnerOut=/dmqjtmp/archiveRunnerLog/runner.out
AlertDogLog=/dmqjtmp/archiveAdogLog/alertDog.log
AlertDogOut=/dmqjtmp/archiveAdogLog/alertDog.out
WatchDogLog=/dmqjtmp/archiveWdogLog/watchDog.log
WatchDogOut=/dmqjtmp/archiveWdogLog/watchDog.out
EmailDogLog=/dmqjtmp/archiveEdogLog/emailDog.log
EmailDogOut=/dmqjtmp/archiveEdogLog/emailDog.out
$ARN $RdrVaxFileArDir 3 && mkdir $RdrVaxFileArDir
$ARN $RdrVaxTokenArDir 3 && mkdir $RdrVaxTokenArDir
$ARN $RdrVaxTokenErrDir 3 && mkdir $RdrVaxTokenErrDir
$ARN $RunnerLog 3 && >$RunnerLog
$ARN $RunnerOut 3 && >$RunnerOut
$ARN $AlertDogLog 3 && >$AlertDogLog
$ARN $AlertDogOut 3 && >$AlertDogOut
$ARN $WatchDogLog 3 && >$WatchDogLog
$ARN $WatchDogOut 3 && >$WatchDogOut
$ARN $EmailDogLog 3 && >$EmailDogLog
$ARN $EmailDogOut 3 && >$EmailDogOut
#VaxDataDir=$DmqJtmp/vaxdata
#VaxTokenDir=$DmqJtmp/vaxtoken
#( cd $VaxDataDir; compress * >/dev/null 2>&1 )
#$ARN $VaxDataDir 3 && mkdir $VaxDataDir && chown dmqvax $VaxDataDir
#$ARN $VaxTokenDir 3 && mkdir $VaxTokenDir && chown dmqvax $VaxTokenDir
exit 0
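The arn.ksh archiver invoked above is not listed in this manual; judging from how it is called ($ARN <path> 3, followed by recreating the path), it rotates a log file or directory through numbered generations. A hypothetical minimal rotator in that style, for illustration only:

```shell
# Hypothetical generation rotator in the style of arn.ksh (the real
# script is not shown in this manual): path -> path.0, path.0 -> path.1,
# and so on, keeping at most n generations; the caller then recreates
# the original path.
rotate() {
    p=$1
    n=$2
    i=$((n - 1))
    rm -rf "$p.$i"                      # drop the oldest generation
    while [ "$i" -gt 0 ]; do
        j=$((i - 1))
        if [ -e "$p.$j" ]; then
            mv "$p.$j" "$p.$i"
        fi
        i=$j
    done
    if [ -e "$p" ]; then
        mv "$p" "$p.0"
    fi
}

d=$(mktemp -d)
echo one > "$d/runner.log"
rotate "$d/runner.log" 3                # runner.log -> runner.log.0
echo two > "$d/runner.log"
rotate "$d/runner.log" 3                # .0 -> .1, runner.log -> .0
```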
#####################################################################
# Script /insight/local/scripts/betaLogClean.ksh
#####################################################################
#!/bin/ksh
################################################################################
#
# Name: betaLogClean.ksh
#
# Reference: n/a
#
# Description: backup ierr*, iaud*, dmqlog.* and archive to LOGS dir
#
# Parameters: None
#
# Modification History:
#
# Date Name Description
# ----------------------------------------------------------
# 2002-10-22 Bob Chong Original
#
#################################################################################
set -v
set -x
Beta=/usr/apps/dmq/beta
BetaLogs=/dmqjtmp/archiveBetaLog/beta/LOGS
BIN=/insight/local/scripts
ARN=$BIN/arn.ksh
psCountEq(){
set -x
[ `ps -ef | egrep "$1" | wc -l` = $2 ]
}
IsReaderDown(){
set -x
psCountEq "[0-9] \./(tcl|cta)" 0
}
archiveLogList(){
set -x
RdDir=$1
Vlogs="$RdDir/i???0??.????????"
Dlogs="$RdDir/dmq???.???"
Logs="$Vlogs $Dlogs"
ArDir=$2
n=3
([ -d $ArDir ] || [ -f $ArDir ]) && $ARN $ArDir $n
(ls $Vlogs || ls $Dlogs) >/dev/null 2>&1 || return 0
mkdir $ArDir && { mv $Logs $ArDir 2>/dev/null; $ARN $ArDir $n; }
}
archiveLog(){
set -x
archiveLogList $Beta $BetaLogs || print "betaLogClean failed!"
}
IsReaderDown && { archiveLog; exit 0; }
exit 0
#####################################################################
# Script /insight/local/scripts/bdsLogClean.ksh
#####################################################################
#!/bin/ksh
################################################################################
#
# Name: bdsLogClean.ksh
#
# Reference: n/a
#
# Description: Download related Logs and Archive to LOGS dir
#
# Parameters: None
#
# Modification History:
#
# Date Name Description
# ----------------------------------------------------------
# 2007-04-17 Denny Guo Original
#
#################################################################################
set -v
set -x
Bds=/usr/apps/ipg/ver001/srv/bds/pgm/ip_0p
BdsLogs=/dmqjtmp/archiveBdsLog/bds/LOGS
BIN=/insight/local/scripts
ARN=$BIN/arn.ksh
archiveLogList(){
set -x
RdDir=$1
B3EXlogs="$RdDir/B3EX_????????.log"
B3RXlogs="$RdDir/B3RX_????????.log"
B3PXlogs="$RdDir/B3PX_????????.log"
TRFXlogs="$RdDir/TRFX_????????.log"
AINXlogs="$RdDir/AINX_????????.log"
QLISlogs="$RdDir/qlistener_????????.log"
BDSlogs="$RdDir/bds_????????.log"
XBATSlogs="$RdDir/xbats600_????????.log"
Logs="$B3EXlogs $B3RXlogs $B3PXlogs $TRFXlogs \
$AINXlogs $QLISlogs $BDSlogs $XBATSlogs"
ArDir=$2
n=3
([ -d $ArDir ] || [ -f $ArDir ]) && $ARN $ArDir $n
mkdir $ArDir && { mv $Logs $ArDir 2>/dev/null; $ARN $ArDir $n; }
}
archiveLog(){
set -x
archiveLogList $Bds $BdsLogs || print "bdsLogClean failed!"
}
archiveLog
cat /dev/null > $Bds/encrypt.log
exit 0
#####################################################################
# Script /insight/local/scripts/ptrLogClean.ksh
#####################################################################
#!/bin/ksh
################################################################################
#
# Name: ptrLogClean.ksh
#
# Reference: n/a
#
# Description: cleanup the ptr files under /usr/apps/dmq/beta
#
# Parameters: None
#
# Modification History:
#
# Date Name Description
# -------------------------------------------------------
# 2002-10-25 Bob Chong Original
#
##################################################################################
set -x
date
find /usr/apps/dmq/beta -name "dmqptr_*.ptr" -type f -mtime +3 -exec /usr/bin/rm -f {} \; \
> /dev/null 2>&1
find /usr/apps/dmq/beta -name "113410_000*.MMA" -type f -mtime +3 -exec /usr/bin/rm -f {} \; \
> /dev/null 2>&1
exit 0
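The find invocations above remove files whose modification time is more than three whole days in the past. A self-contained demonstration of the -mtime +3 selection, using scratch files with a back-dated timestamp (file names are illustrative):

```shell
# Demonstration of the -mtime +3 cleanup idiom: only files last
# modified more than 3 whole days ago are removed.
d=$(mktemp -d)
touch -t 202001010000 "$d/dmqptr_old.ptr"    # back-dated well past 3 days
touch "$d/dmqptr_new.ptr"                    # modified just now
find "$d" -name "dmqptr_*.ptr" -type f -mtime +3 -exec rm -f {} \;
ls "$d"                                      # only dmqptr_new.ptr remains
```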
#####################################################################
# Script /insight/local/scripts/bkupLogClean.ksh
#####################################################################
#!/bin/ksh
################################################################################
#
# Name: bkupLogClean.ksh
#
# Reference: n/a
#
# Description: cleanup the backup log files under /sitemgr/backup/workarea
#
# Parameters: None
#
# Modification History:
#
# Date Name Description
# -------------------------------------------------------
# 2002-10-25 Bob Chong Original
#
##################################################################################
set -x
date
find /dmqjtmp/archiveSysbkupLog -mtime +30 -exec /usr/bin/rm -f {} \; \
> /dev/null 2>&1
find /dmqjtmp/archiveAppbkupLog -mtime +7 -exec /usr/bin/rm -f {} \; \
> /dev/null 2>&1
find /dmqjtmp/archiveDbsbkupLog -mtime +7 -exec /usr/bin/rm -f {} \; \
> /dev/null 2>&1
find /dmqjtmp/archiveFfileLog -mtime +7 -exec /usr/bin/rm -f {} \; \
> /dev/null 2>&1
exit 0
#####################################################################
# Script /insight/local/scripts/sqexplainClean.ksh
#####################################################################
#!/usr/bin/ksh
# null the sqexplain.out and other log files
# (the lines below are crontab entries, run daily at midnight)
0 0 * * * cat /dev/null > /home/ipgown/sqexplain.out
0 0 * * * cat /dev/null > /home/ipinsdoc/sqexplain.out
0 0 * * * cat /dev/null > /home/ipuser/sqexplain.out
0 0 * * * cat /dev/null > /home/insrpt/sqexplain.out
0 0 * * * cat /dev/null > /usr/apps/ipg/ver001/srv/locus/sqexplain.out
0 0 * * * cat /dev/null > /usr/apps/ipg/ver001/srv/insight/sqexplain.out
0 0 * * * cat /dev/null > /usr/apps/ipg/ver001/srv/bds/pgm/ip_0p/sqexplain.out
#####################################################################
# Script /home/dguo/script/check_systape.ksh
#####################################################################
#####################################################################
# Script /insight/local/backup/sysbkup.ksh
#####################################################################
#!/bin/ksh
################################################################################
#
# Name: sysbkup.ksh
#
# Reference: n/a
#
# Description: system backup using mksysb
#
# Parameters: sysbkup.ksh <tape device>
# tape device /dev/rmt0
#
# Modification History:
#
# Date Name Description
# ------------------------------------------------------------
# 2002-10-29 Bob Chong Original
# 2006-10-06 Denny Guo Modified
# Add operator to mail list;
# 2007-02-13 Denny Guo Modified
# Check tape availability;
#
################################################################################
set -v
set -x
# script library
PATH=$PATH:/insight/local/backup/sitelib:.
cd /dmqjtmp/archiveSysbkupLog
backup_tape=/dev/$1
backup_lisfile=sysbkup_lis.
backup_errfile=sysbkup_err.
backup_logfile=sysbkup_log.
backup_date=`date +%Y%m%d%H%M`
lisfile=$backup_lisfile$backup_date
errfile=$backup_errfile$backup_date
logfile=$backup_logfile$backup_date
aixsupport="lchen@livingstonintl.com"
date >$logfile
Check_Tape()
{
# rewind the tape;
tctl -f $backup_tape rewind
if [[ $? -eq 0 ]]
then
return 0 #tape is ready;
else
date > $errfile
echo "\nError: tape is not ready" >> $errfile
mail -s "Tape is not ready for SYSTEM backup on IFX01 @ $backup_date" \
computerops@livingstonintl.com < $errfile
return 1
fi
}
count=3
while [[ $count -gt 0 ]]; do
Check_Tape
if [[ $? -eq 0 ]]; then
break #tape ready, continue to do backup;
else
count=$(($count-1))
banner "Tapes Please!!!"
if [[ $count -eq 0 ]]
then
mail -s "Ifx01 Sysbackup Failed Due to Tape not ready ..." \
computerops@livingstonintl.com < /dev/null
mail -s "Ifx01 Sysbackup Failed Due to Tape not ready ..." $aixsupport < /dev/null
exit 1
fi
fi
sleep 120
done
# backup of the operating system (that is, the root volume group)
mksysb -e -p -i $backup_tape 1>>$logfile 2>&1
errsts=$?
if (($errsts != 0))
then
errevent $logfile "<error = $errsts> error on mksysb command:"
mail -s "System backup failed (IFX01): $backup_date" $aixsupport <$logfile
mail -s "System backup failed (IFX01): $backup_date" computerops@livingstonintl.com <$logfile
tctl -f $backup_tape offline
exit 1
fi
date >> $logfile
# rewind the tape
bot.check $backup_tape $logfile
# finally list all the files on tape
eventlog $logfile "----------------------------------------------------"
eventlog $logfile "Listing of the root volume group:" | tee -a $lisfile
eventlog $logfile "----------------------------------------------------"
/usr/sbin/restore -Tqs4 -f $backup_tape.1 >> $lisfile 2>> $logfile
errsts=$?
if (($errsts != 0))
then
errevent $logfile "\t <$errsts> error on readcheck of system backup" | tee -a $lisfile
eventlog $logfile "\tDumping the contents of error file:"
mail -s "System backup failed (IFX01): $backup_date" $aixsupport <$logfile
mail -s "System backup failed (IFX01): $backup_date" computerops@livingstonintl.com <$logfile
tctl -f $backup_tape offline
exit 1
fi
cat $errfile | tee -a $logfile
eventlog $logfile "----------------------------------------------------"
#rm $errfile
eventlog $logfile "SYSTEM BACKUP task has been completed"
eventlog $logfile "----------------------------------------------------"
sleep 3
# dismount the tape
tctl -f $backup_tape offline
sleep 3
#Mail to Administrator;
mail -s "System backup successful (IFX01): $backup_date" $aixsupport <$logfile
#Mail to operator;
mail -s "System backup successful (IFX01): $backup_date" computerops@livingstonintl.com <$logfile
exit 0
#####################################################################
# Script /home/dguo/script/check_apptape.ksh
#####################################################################
#####################################################################
# Script /insight/local/backup/appbkup.ksh
#####################################################################
#!/bin/ksh
################################################################################
#
# Name: appbkup.ksh
#
# Reference: n/a
#
# Description: application backup using multiple filesystem backups
# backup all filesystems except database filesystems
#
# Parameters: appbkup <tape device>
# tape device /dev/rmt1
#
# Modification History:
#
# Date Name Description
# --------------------------------------------------------------
# 2002-10-30 Bob Chong Original
# 2006-10-06 Denny Guo Modified
# Add operators to mail list;
# 2007-02-13 Denny Guo Modified
# Check tape availability
#
################################################################################
set -v
set -x
# script library
PATH=/insight/local/backup/sitelib:$PATH:.
# log file directory
cd /dmqjtmp/archiveAppbkupLog
# database filesystems
ixFS=/ix_root:/ix_plog:/ix_llog:/ix_dat1:/ix_dat2:/ix_dat3:/ix_idx1:/ix_idx2:/ix_idx3:/ix_temp
arFS=/ach_root:/ach_plog:/ach_llog:/ach_dat1:/ach_dat2:/ach_dat2_2:/ach_dat1_2:/ach_idx1:/ach_idx2:/ach_temp
dbFS=${ixFS}:${arFS}
# exclude filesystems
xList=`echo ${dbFS} | sed "s/:/:|/g"`":"
# backup level and tape drive
backup_level=0
backup_tape=/dev/$1
# log files
backup_lisfile=level0_lis.
backup_errfile=level0_err.
backup_logfile=level0_log.
backup_date=`date +%Y%m%d%H%M`
lisfile=$backup_lisfile$backup_date
errfile=$backup_errfile$backup_date
logfile=$backup_logfile$backup_date
aixsupport="lchen@livingstonintl.com"
Check_Tape()
{
# rewind the tape
tctl -f $backup_tape rewind
if [[ $? -eq 0 ]]
then
return 0 #tape is ready for backup;
else
date > $errfile
echo "\nError: tape is not ready" >> $errfile
mail -s "Tape is not ready for APPLICATION backup on IFX01 @ $backup_date" \
computerops@livingstonintl.com < $errfile
return 1
fi
}
count=3
while [[ $count -gt 0 ]]; do
Check_Tape
if [[ $? -eq 0 ]]; then
break #tape ready, continue to do the backup;
else
count=$(($count-1))
banner "Tapes Please!!!"
if [[ $count -eq 0 ]]
then
mail -s "Ifx01 Application Backup Failed Due to Tape not ready ..." $aixsupport < /dev/null
mail -s "Ifx01 Application Backup Failed Due to Tape not ready ..." \
computerops@livingstonintl.com < /dev/null
exit 1
fi
fi
sleep 120
done
# get a list of the mounted "jfs" filesystems exclude database filesystems
filesys=`lsfs -c -v jfs2 | tail +2 | grep -v -E "$xList" | cut -f1 -d":"`
# log the archive filesystems
eventlog $logfile "Backup filesystem listing=$filesys"
# get number of filesystem in the string "filesys"
set $filesys
integer fsCount=$#
# do not use <*> inside argument values : <*> is translated as <ls *>
eventlog $lisfile "-------------------------------------------------------"
eventlog $lisfile "BACKUP LEVEL: $backup_level"
eventlog $lisfile "BACKUP DATE : `date`"
eventlog $lisfile "-------------------------------------------------------"
# Backup the file systems in the listing
integer xCount=0
line=$filesys
set $line
while (( $xCount < $fsCount ))
do
eventlog $logfile "----------------------------------------------------"
eventlog $logfile "Backing up filesystem: $1"
eventlog $logfile "----------------------------------------------------"
sync
sleep 5
backup -$backup_level -uf $backup_tape.1 $1 2>&1 | tee -a $logfile
errsts=$?
if (($errsts != 0))
then
errevent $logfile "<error = $errsts> error on backing up filesystem: $1"
mail -s "Application backup failed (IFX01): $backup_date" $aixsupport <$logfile
mail -s "Application backup failed (IFX01): $backup_date" computerops@livingstonintl.com <$logfile
tctl -f $backup_tape offline
exit 1
fi
shift 1
xCount=xCount+1
done
# rewind the tape
bot.check $backup_tape $logfile
# finally list all the files on tape
integer xCount=1
line=$filesys
set $line
while (( $xCount <= $fsCount ))
do
eventlog $logfile "----------------------------------------------------"
eventlog $logfile "Listing of filesystem: $1 File number: $xCount" | tee -a $lisfile
eventlog $logfile "----------------------------------------------------"
restore -s1 -qvTf $backup_tape.1 >>$lisfile 2>$errfile
errsts=$?
eventlog $logfile "Dumping the contents of error file:"
cat $errfile | tee -a $logfile
if (($errsts != 0))
then
errevent $logfile "<error = $errsts> error on reading filesystem: $1" | tee -a $lisfile
mail -s "Application backup failed (IFX01): $backup_date" $aixsupport <$logfile
mail -s "Application backup failed (IFX01): $backup_date" computerops@livingstonintl.com <$logfile
tctl -f $backup_tape offline
exit 1
fi
shift 1
xCount=xCount+1
done
#rm $errfile
eventlog $logfile "/etc/dumpdates at the close of this backup:"
sort /etc/dumpdates | tee -a $logfile
eventlog $logfile "BACKUP task has been completed"
sleep 5
# dismount the tape
tctl -f $backup_tape offline
sleep 5
#Mail to Administrator;
mail -s "Application backup successful (IFX01): $backup_date" $aixsupport <$logfile
#Mail to operator!
mail -s "Application backup successful (IFX01): $backup_date" computerops@livingstonintl.com <$logfile
exit 0
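The appbkup.ksh script builds its exclusion regex by turning the colon-separated dbFS list into an egrep alternation (sed "s/:/:|/g"), so each database mount point, trailing colon included, matches one field of lsfs's colon-separated output. A sketch of that construction run against canned sample data (the lsfs lines are hypothetical):

```shell
# Sketch of the exclusion-list trick from appbkup.ksh, applied to
# canned lsfs-style output rather than a live system.
dbFS=/ix_dat1:/ix_idx1
xList=$(echo "$dbFS" | sed "s/:/:|/g")":"    # -> "/ix_dat1:|/ix_idx1:"
lsfs_sample='/usr/apps:/dev/lv01:jfs2
/ix_dat1:/dev/lv02:jfs2
/ix_idx1:/dev/lv03:jfs2
/dmqjtmp:/dev/lv04:jfs2'
filesys=$(printf '%s\n' "$lsfs_sample" | grep -v -E "$xList" | cut -f1 -d":")
echo "$filesys"                              # /usr/apps and /dmqjtmp survive
```

The trailing colon on each alternative keeps a name such as /ix_dat1 from also excluding a hypothetical /ix_dat1_extra.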
#####################################################################
# Script /home/dguo/script/check_dbstape.ksh
#####################################################################
#####################################################################
# Script /insight/local/backup/dbsbkup.ksh
#####################################################################
#!/bin/ksh
################################################################################
#
# Name: dbsbkup.ksh
#
# Reference: n/a
#
# Description: a production database backup
# a level-0 backup using ontape
#
# Parameters: dbsbkup.ksh <tape device>
# tape device /dev/rmt1
#
# Modification History:
#
# Date Name Description
# ------------------------------------------------------------
# 2002-12-10 Bob Chong Original
#
################################################################################
set -v
set -x
date
su - informix -c "/insight/local/backup/infbkup.ksh" > /dmqjtmp/archiveDbsbkupLog/infbkup.out 2>&1
exit 0
#####################################################################
# Script /insight/local/scripts/getTxnRpt.pl
#####################################################################
#!/usr/bin/perl -w
@QueList = qw(10 21 22 31 32 34 41 46 51 52 70 71 81);
$FF_all = 0;
$Total_all = 0;
$Errors_all = 0;
$Records_all = 0;
$Messages_all = 0;
$Bytes_all = 0;
$c = "|";
$DmqlogDir = "/usr/apps/dmq/beta/LOGS/LOGS.0";
#$DmqlogDir = "/usr/apps/dmq/beta/LOGS/LOGS.1";
#$DmqlogDir = "/usr/apps/dmq/beta/LOGS/LOGS.2";
#$DmqlogDir = "/login/dguo/temp";
#$ReportFile = "/login/dguo/report/trans.rpt";
chomp($stamp = `date +%Y%m%d`);
$ReportFile = "/dmqjtmp/archiveFfileLog/getTxnRpt.${stamp}";
open (OUTFILE,">$ReportFile");
# report for transactions
$DATE = `date`;
format OUTFILE_TOP =
Daily Transaction Report
@<<<<<<<<<<
$DATE
------------------------------------------------------------------------------
|QNum|Total_FF| Transaction | Messages | Errors| Records | Bytes |
.
format OUTFILE =
|----|--------|-------------|------------|-------|-------------|---------------|
@<@||@<@||||||@<@>>>>>>>>>>>@<@>>>>>>>>>>@<@>>>>>@<@>>>>>>>>>>>@<@>>>>>>>>>>>>>@<
$c,$qname,$c,$FF,$c,$Total,$c,$Messages,$c,$Errors,$c,$Records,$c,$Bytes,$c
.
sub convert
{
($num) = @_;
if ( $num > 999999999999 )
{
print "Exceed Max Number, quit!\n";
$num = 999999999999;
exit;
}
$num =
( $num =~ /(\d{1,3})(\d{3})(\d{3})(\d{3})$/ ) ?
sprintf "%d,%3s,%3s,%3s\n",$1,$2,$3,$4 :
( $num =~ /(\d{1,3})(\d{3})(\d{3})$/ ) ?
sprintf "%d,%3s,%3s\n",$1,$2,$3 :
( $num =~ /(\d{1,3})(\d{3})$/ ) ?
sprintf "%d,%3s\n",$1,$2 : $num ;
return $num;
}
$i = 0;
foreach (@QueList)
{
$qname = $_;
$FF = 0;
$Total = 0;
$Errors = 0;
$Records = 0;
$Messages = 0;
$Bytes = 0;
$i++;
next if ( ! -e "$DmqlogDir/dmqlog.0${qname}");
open (LOGFILE,"$DmqlogDir/dmqlog.0${qname}")||die " Open Files Failed ... ";
while (<LOGFILE>)
{
next if ( !/Transactions/ );
(undef,undef,undef,$num,$err,undef) = split(' ',$_);
$Total += $num;
$Errors += $err;
$FF++;
$_ = <LOGFILE>;
(undef,undef,undef,$num,undef) = split(' ',$_);
$Messages += $num;
$_ = <LOGFILE>;
(undef,undef,undef,$num,undef) = split(' ',$_);
$Records += $num;
$_ = <LOGFILE>;
(undef,undef,undef,$num,undef) = split(' ',$_);
$Bytes += $num;
}
close (LOGFILE);
$FF_all += $FF;
$Total_all += $Total;
$Records_all += $Records;
$Errors_all += $Errors;
$Messages_all += $Messages;
$Bytes_all += $Bytes;
$FF = convert($FF);
$Total = convert($Total);
$Records = convert($Records);
$Errors = convert($Errors);
$Messages = convert($Messages);
$Bytes = convert($Bytes);
write OUTFILE;
}
close(OUTFILE);
$FF_all = convert($FF_all);
$Total_all = convert($Total_all);
$Records_all = convert($Records_all);
$Errors_all = convert($Errors_all);
$Messages_all = convert($Messages_all);
$Bytes_all = convert($Bytes_all);
open (ENDFILE,">>$ReportFile");
format ENDFILE =
------------------------------------------------------------------------------
Total:
------------------------------------------------------------------------------
@<@||@<@||||||@<@>>>>>>>>>>>@<@>>>>>>>>>>@<@>>>>>@<@>>>>>>>>>>>@<@>>>>>>>>>>>>>@<
$c,$i,$c,$FF_all,$c,$Total_all,$c,$Messages_all,$c,$Errors_all,$c,$Records_all,$c,$Bytes_all,$c
------------------------------------------------------------------------------
.
write ENDFILE;
close (ENDFILE);
#`mail -s "IFX01 LocusTxnRpt.$stamp" dguo\@livingstonintl.com < $ReportFile`;
#####################################################################
# Script /insight/local/scripts/cron_bkup.ksh
#####################################################################
#!/usr/bin/ksh
#Save cron jobs for everybody
cd /insight/local/crontabs
cp -p /var/spool/cron/crontabs/* .
#####################################################################
# Script /sitemgr/b3_arch/run_autoarchive.ksh
#####################################################################
#####################################################################
# Script /insight/local/b3_arch/run_autoarchive.ksh
#####################################################################
#!/bin/ksh
#####################################################################
#Archive B3 data from Production instance to archive instance #
#Purge will be done manually after the verification #
#Author : bob chong #
#Date : Sept 20, 2000 #
#####################################################################
umask 0000
INFORMIXSERVER="ardb"
INFORMIXDIR="/usr/apps/inf/ver115UC3"
GL_DATETIME="%iY/%m/%d %H:%M:%S"
PATH=$INFORMIXDIR/bin:$PATH
export INFORMIXDIR INFORMIXSERVER PATH GL_DATETIME
local_dir=/insight/local/b3_arch
log_dir=/dmqjtmp/archiveB3Log
week_no=`date +%w`
year_no=`date +%Y`
month_no=`date +%m`
day_no=`date +%d`
logfile=${log_dir}/${year_no}${month_no}${day_no}archive.log
aixsupport="lchen@livingstonintl.com"
cd $local_dir
echo "\nStarting monthly archive (no purge) program on `date`"
echo "-----------------------------------------------------------------"
#Start archive and Purge job
su - informix -c ${local_dir}/autoArchive.ksh >> $logfile
echo "End of archive and (no purge) program - `date`"
echo "======================================================================"
echo "Please Stop the Cron Job and Verify the ARCHIVE!!!"| \
mail -s "Monthly B3 Archive Done @ `date`." $aixsupport
#####################################################################
# Script /usr/apps/inf/bob/upstat/upstat.ksh
#####################################################################
#!/bin/ksh
##################################################################################
#
# purpose: run update statistics (medium, high, and procedures)
#
##################################################################################
export INFORMIXDIR=/usr/apps/inf/ver115UC3
export INFORMIXSERVER=ipdb
export PATH=$INFORMIXDIR/bin:$PATH
SQLDIR=/usr/apps/inf/bob/upstat
echo
date
time dbaccess < $SQLDIR/tbls_med.sql > $SQLDIR/tbls_med.out 2>&1
time dbaccess < $SQLDIR/tbls_high.sql > $SQLDIR/tbls_high.out 2>&1
time dbaccess < $SQLDIR/proc.sql > $SQLDIR/proc.out 2>&1
exit 0
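The upstat job above runs SQL files that repeat one UPDATE STATISTICS statement per indexed column (see the listing that follows). Such boilerplate can also be generated from a column list; a small sketch, with table and column names taken from the b3 statements as examples:

```shell
# Sketch: generating repetitive UPDATE STATISTICS statements from a
# column list instead of editing each line by hand. The table and
# column names are examples taken from this manual's b3 statements.
gen_upstat() {
    table=$1
    shift
    for col in "$@"; do
        printf 'update statistics high for table %s(%s) ;\n' "$table" "$col"
    done
}

gen_upstat b3 b3iid transno reldate
```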
root@ifx01:/usr/apps/inf/bob/upstat #ls -l
total 0
drwxr-sr-x 2 informix informix 256 Feb 12 2007 .OldJob
-rwxrw-rw- 1 informix informix 324 Jul 18 2003 Readme
-rw-r--r-- 1 informix informix 71 Aug 15 06:07 proc.out
-rw-r--r-- 1 informix informix 163 Mar 20 2009 proc.sql
-rw-r--r-- 1 informix informix 2109 Aug 15 06:07 tbls_high.out
-rw-r--r-- 1 informix informix 6445 Mar 20 2009 tbls_high.sql
-rw-r--r-- 1 informix informix 2285 Aug 15 00:44 tbls_med.out
-rw-r--r-- 1 informix informix 6812 Mar 20 2009 tbls_med.sql
-rw-r--r-- 1 informix informix 4850 Apr 24 2007 tbls_med_V1.sql
-rw-r--r-- 1 informix informix 3128 Apr 24 2007 tbls_med_V2.sql
-rwxrw-rw- 1 informix informix 404 Mar 18 2009 upstat.ksh
-rw-r--r-- 1 informix informix 361028 Aug 15 06:07 upstat.out
root@ifx01:/usr/apps/inf/bob/upstat #cat *.sql
--
-- run update statistics to re-compile stored procedures;
-- Denny Guo @ Feb 09, 2007
--
database ip_0p@ipdb ;
update statistics for procedure ;
close database ;
--
-- run update statistics to optimize the database search
-- Denny Guo @ Feb 12, 2007
--
database ip_0p@ipdb ;
-- update for b3 table;
update statistics high for table b3(b3iid) ;
update statistics high for table b3(transno) ;
update statistics high for table b3(reldate) ;
update statistics high for table b3(approveddate) ;
update statistics high for table b3(createdate) ;
update statistics high for table b3(cargcntrlno) ;
update statistics high for table b3(custoff) ;
update statistics high for table b3(usportexit) ;
update statistics high for table b3(carriercode) ;
update statistics high for table b3(modetransp) ;
update statistics high for table b3(status) ;
update statistics high for table b3(liibrchno) ;
update statistics low for table b3(liibrchno,liirefno) ;
update statistics high for table b3(liiclientno) ;
update statistics low for table b3(liiclientno,liiaccountno) ;
-- update for status_history table;
update statistics high for table status_history(b3iid) ;
update statistics low for table status_history(b3iid,status) ;
-- Update for b3_subheader table;
update statistics high for table b3_subheader(b3subiid) ;
update statistics high for table b3_subheader(b3iid) ;
-- Update for b3_line table;
update statistics high for table b3_line(b3lineiid) ;
update statistics high for table b3_line(b3subiid) ;
update statistics high for table b3_line(hsno) ;
-- Update for b3_recap_details table;
update statistics high for table b3_recap_details(b3recapiid) ;
update statistics high for table b3_recap_details(b3lineiid) ;
update statistics high for table b3_recap_details(proddesc) ;
update statistics high for table b3_recap_details(detailponumber) ;
-- Update for tariff table;
update statistics high for table tariff(liiclientno) ;
update statistics high for table tariff(createdate) ;
update statistics high for table tariff(tariffcode) ;
update statistics high for table tariff(lastuseddate) ;
update statistics high for table tariff(tarifftrtmnt) ;
update statistics high for table tariff(hsno) ;
update statistics high for table tariff(remarks) ;
update statistics high for table tariff(moddate) ;
update statistics high for table tariff(b3description) ;
update statistics high for table tariff(productkeyword) ;
update statistics low for table tariff(liiclientno,createdate) ;
update statistics low for table tariff(liiclientno,lastuseddate) ;
update statistics low for table tariff(liiclientno,moddate) ;
update statistics low for table tariff(liiclientno,tarifftrtmnt) ;
update statistics low for table tariff(liiclientno,vendorname,productkeyword,productsufx) ;
-- Update for client_invoice table;
update statistics high for table client_invoice(liiclientno);
update statistics low for table
client_invoice(liiclientno,liiaccountno,liibrchno,liirefno,liireftext);
update statistics high for table client_invoice(itemtypecode);
update statistics high for table client_invoice(balance);
update statistics high for table client_invoice(itemstatus);
update statistics high for table client_invoice(itemdate);
update statistics high for table client_invoice(totduty);
update statistics high for table client_invoice(liiaccountno);
update statistics high for table client_invoice(liirefno);
update statistics high for table client_invoice(liibrchno);
-- Update for claim_log table;
update statistics high for table claim_log(claimlogiid);
update statistics high for table claim_log(b3acctsecurno);
update statistics low for table claim_log(b3acctsecurno,b3transno,b3transseqno);
update statistics high for table claim_log(b2brchno);
update statistics low for table claim_log(b2brchno,b2refno);
update statistics high for table claim_log(claimstatus);
update statistics high for table claim_log(claimcode);
update statistics high for table claim_log(claimvendorname);
update statistics high for table claim_log(b3transno);
update statistics high for table claim_log(claimrefno);
-- Update for as_accounted table;
update statistics high for table as_accounted(asacctiid);
update statistics high for table as_accounted(claimlogiid);
update statistics low for table as_accounted(claimlogiid,b2subhdrno,b3lineno,b2lineno);
update statistics high for table as_accounted(b3lineno);
update statistics high for table as_accounted(b2lineno);
update statistics high for table as_accounted(hsno);
-- Update for as_claimed table;
update statistics high for table as_claimed(asclaimediid);
update statistics high for table as_claimed(claimlogiid);
update statistics high for table as_claimed(b3lineno);
update statistics high for table as_claimed(b2lineno);
update statistics high for table as_claimed(hsno);
update statistics low for table as_claimed(claimlogiid,b2subhdrno,b3lineno,b2lineno);
-- Update for carrier table;
update statistics high for table carrier(carriercode);
-- Update for hs_uom table;
update statistics high for table hs_uom(hsno);
update statistics low for table hs_uom(hsno,effdate);
-- Update for hs_duty_rate table;
update statistics high for table hs_duty_rate(hsno);
update statistics low for table hs_duty_rate(hsno,hstarifftrtmnt,effdate);
-- Update for user_locus_xref table;
update statistics high for table user_locus_xref(userlocusxrefiid);
-- Update for tariff_code table;
update statistics high for table tariff_code(tariffcode);
update statistics low for table tariff_code(tariffcode,effdate,hstarifftrtmnt);
-- Update for lii_client table;
update statistics high for table lii_client(liiclientno);
-- Update for securuser table;
update statistics high for table securuser(useriid);
update statistics high for table securuser(username);
-- Update for lii_account table;
update statistics high for table lii_account(liiclientno);
update statistics low for table lii_account(liiclientno,liiaccountno);
-- Update for account_contact table;
update statistics high for table account_contact(liiclientno);
update statistics low for table account_contact(liiclientno,liiaccountno);
-- Update for b3b table;
update statistics high for table b3b(b3biid);
update statistics high for table b3b(b3iid);
-- Update for search_criteria table;
update statistics high for table search_criteria(useriid);
update statistics low for table search_criteria(useriid,liiclientno,liiaccountno);
-- Update for b3_line_comment table;
update statistics high for table b3_line_comment(b3linecommentiid);
update statistics high for table b3_line_comment(b3lineiid);
close database ;
-- Created by Denny Guo @ Jan 22, 2009
database ip_0p@ipdb;
update statistics medium for table systables;
update statistics medium for table syscolumns;
update statistics medium for table sysindices;
update statistics medium for table systabauth;
update statistics medium for table syscolauth;
update statistics medium for table sysviews;
update statistics medium for table sysusers;
update statistics medium for table sysdepend;
update statistics medium for table syssynonyms;
update statistics medium for table syssyntable;
update statistics medium for table sysconstraints;
update statistics medium for table sysreferences;
update statistics medium for table syschecks;
update statistics medium for table sysdefaults;
update statistics medium for table syscoldepend;
update statistics medium for table sysprocedures;
update statistics medium for table sysprocbody;
update statistics medium for table sysprocplan;
update statistics medium for table sysprocauth;
update statistics medium for table sysblobs;
update statistics medium for table sysopclstr;
update statistics medium for table systriggers;
update statistics medium for table systrigbody;
update statistics medium for table sysdistrib;
update statistics medium for table sysfragments;
update statistics medium for table sysobjstate;
update statistics medium for table sysviolations;
update statistics medium for table sysfragauth;
update statistics medium for table sysroleauth;
update statistics medium for table sysxtdtypes;
update statistics medium for table sysattrtypes;
update statistics medium for table sysxtddesc;
update statistics medium for table sysinherits;
update statistics medium for table syscolattribs;
update statistics medium for table syslogmap;
update statistics medium for table syscasts;
update statistics medium for table sysxtdtypeauth;
update statistics medium for table sysroutinelangs;
update statistics medium for table syslangauth;
update statistics medium for table sysams;
update statistics medium for table systabamdata;
update statistics medium for table sysopclasses;
update statistics medium for table syserrors;
update statistics medium for table systraceclasses;
update statistics medium for table systracemsgs;
update statistics medium for table sysaggregates;
update statistics medium for table syssequences;
update statistics medium for table sysdirectives;
update statistics medium for table sysxasourcetypes;
update statistics medium for table sysxadatasources;
update statistics medium for table sysseclabelcomponents;
update statistics medium for table sysseclabelcomponentelements;
update statistics medium for table syssecpolicies;
update statistics medium for table syssecpolicycomponents;
update statistics medium for table syssecpolicyexemptions;
update statistics medium for table sysseclabels;
update statistics medium for table sysseclabelnames;
update statistics medium for table sysseclabelauth;
update statistics medium for table syssurrogateauth;
update statistics medium for table sysproccolumns;
update statistics medium for table sysdomains;
update statistics medium for table sysindexes;
update statistics medium for table client_invoice ;
update statistics medium for table ctry_code ;
update statistics medium for table tservice ;
update statistics medium for table state_model ;
update statistics medium for table usport_exit ;
update statistics medium for table stringtable ;
update statistics medium for table claim_log ;
update statistics medium for table branch ;
update statistics medium for table carrier ;
update statistics medium for table hs_uom ;
update statistics medium for table hs_duty_rate ;
update statistics medium for table as_accounted ;
update statistics medium for table as_claimed ;
update statistics medium for table transp_mode ;
update statistics medium for table user_locus_xref ;
update statistics medium for table contact_type ;
update statistics medium for table canct_off ;
update statistics medium for table client ;
update statistics medium for table lii_contact ;
update statistics medium for table tariff_code ;
update statistics medium for table lii_client ;
update statistics medium for table securuser ;
update statistics medium for table lii_account ;
update statistics medium for table account_contact ;
update statistics medium for table b3b ;
update statistics medium for table search_criteria ;
update statistics medium for table b3_line_comment ;
update statistics medium for table terr ;
update statistics medium for table gst_rate_code ;
update statistics medium for table srch_crit_batch ;
update statistics medium for table bulletin ;
update statistics medium for table company ;
update statistics medium for table insight_pdq ;
update statistics medium for table bat_info ;
update statistics medium for table b3
(liiaccountno,
liirefno,
acctsecurno,
b3type,
k84date,
portunlading,
totb3duty,
totb3exctax,
totb3gst,
totb3sima,
totb3vfd,
weight,
purchaseorder1,
purchaseorder2,
shipvia,
locationofgoods,
containerno,
vendorname,
vendorstate,
vendorzip,
freight,
billoflading,
cargcntrlqty,
sbrnno,
ccnqty,
ccinumlines,
invoiceqty,
warehousenum,
entname,
entaddr1,
entaddr2,
entaddr3,
entaddr4,
entpostcd) resolution 1.0 0.99 ;
update statistics medium for table status_history
(status,
statusdate) resolution 1.0 0.99 ;
update statistics medium for table b3_subheader
(b3subno,
ctryorigin,
currcode,
placeexp,
shipdate,
tarifftrtmnt,
timelim,
timelimunit,
vendorname,
vendorstate,
vendorzip) resolution 1.0 0.99;
update statistics medium for table b3_line
(b3subiid,
b3lineno,
advaldutyrateumeas,
advalrate1,
convtoqty1,
convtoqty2,
convtoqty3,
excduty,
excdutyrateumeas,
excdutyrate,
exchgrate,
exctax,
exctaxrateumeas,
exctaxrate,
gst,
gstrate,
oicspecialaut,
partkeywrd,
partsufx,
partdesc,
simacode,
simaval,
spcdutyrateumeas,
spcrate,
tariffcode,
vfcc,
vfd,
vfdcode,
vft,
linecomment,
advalduty,
spcduty,
totalduty,
gstexemptcode,
exctaxexmptcode,
rulingnumber,
trqno,
prevtransno,
prevlineno) resolution 1.0 0.99 ;
update statistics medium for table b3_recap_details
(b3lineiid,
ccipageno,
ccilineno,
uom,
quantity,
amount,
percentsplit,
unitprice)
RESOLUTION 1.0 0.99 ;
update statistics medium for table tariff
(vendorname,
productsufx,
approvalcode,
b3refbrch,
b3refno,
createdate,
cooindicator,
cooexprydate,
exctaxlicind,
gstexemptcode,
gstratecode,
lastuseddate,
moddate,
moduser,
oic,
oicexprydate,
percentsplit,
placeexp,
remissno,
remissexprydate,
rulingno,
rulingexprydate,
specialinstruct,
tarifftrtmnt,
vfdcode,
exctaxrate,
exctaxamt,
exctaxunit,
exctaxdeduct,
exctaxdeductunit,
exctaxexmptcode,
projectcode,
businessunitcode,
materialclasscode,
countryorigin,
requirementid,
version,
ogdextension,
enduse,
miscellaneous,
regtype01) RESOLUTION 1.0 0.99 ;
close database ;
-- Created by Denny Guo @ Feb 12, 2007
database ip_0p@ipdb ;
update statistics medium for table systables ;
update statistics medium for table syscolumns ;
update statistics medium for table sysindexes ;
update statistics medium for table systabauth ;
update statistics medium for table syscolauth ;
update statistics medium for table sysviews ;
update statistics medium for table sysusers ;
update statistics medium for table sysdepend ;
update statistics medium for table sysconstraints ;
update statistics medium for table sysreferences ;
update statistics medium for table sysdefaults ;
update statistics medium for table syscoldepend ;
update statistics medium for table sysprocedures ;
update statistics medium for table sysprocbody ;
update statistics medium for table sysprocplan ;
update statistics medium for table sysprocauth ;
update statistics medium for table sysblobs ;
update statistics medium for table systriggers ;
update statistics medium for table systrigbody ;
update statistics medium for table sysdistrib ;
update statistics medium for table sysfragments ;
update statistics medium for table sysobjstate ;
update statistics medium for table client_invoice ;
update statistics medium for table ctry_code ;
update statistics medium for table tservice ;
update statistics medium for table state_model ;
update statistics medium for table usport_exit ;
update statistics medium for table stringtable ;
update statistics medium for table claim_log ;
update statistics medium for table branch ;
update statistics medium for table carrier ;
update statistics medium for table hs_uom ;
update statistics medium for table hs_duty_rate ;
update statistics medium for table as_accounted ;
update statistics medium for table as_claimed ;
update statistics medium for table transp_mode ;
update statistics medium for table user_locus_xref ;
update statistics medium for table contact_type ;
update statistics medium for table canct_off ;
update statistics medium for table client ;
update statistics medium for table lii_contact ;
update statistics medium for table tariff_code ;
update statistics medium for table lii_client ;
update statistics medium for table securuser ;
update statistics medium for table lii_account ;
update statistics medium for table account_contact ;
update statistics medium for table b3b ;
update statistics medium for table search_criteria ;
update statistics medium for table b3_line_comment ;
update statistics medium for table terr ;
update statistics medium for table gst_rate_code ;
update statistics medium for table srch_crit_batch ;
update statistics medium for table bulletin ;
update statistics medium for table company ;
update statistics medium for table insight_pdq ;
update statistics medium for table bat_info ;
update statistics medium for table b3
(liiaccountno,
liirefno,
acctsecurno,
b3type,
k84date,
portunlading,
totb3duty,
totb3exctax,
totb3gst,
totb3sima,
totb3vfd,
weight,
purchaseorder1,
purchaseorder2,
shipvia,
locationofgoods,
containerno,
vendorname,
vendorstate,
vendorzip,
freight,
billoflading,
cargcntrlqty,
sbrnno,
ccnqty,
ccinumlines,
invoiceqty,
warehousenum,
entname,
entaddr1,
entaddr2,
entaddr3,
entaddr4,
entpostcd) resolution 1.0 0.99 ;
update statistics medium for table status_history
(status,
statusdate) resolution 1.0 0.99 ;
update statistics medium for table b3_subheader
(b3subno,
ctryorigin,
currcode,
placeexp,
shipdate,
tarifftrtmnt,
timelim,
timelimunit,
vendorname,
vendorstate,
vendorzip) resolution 1.0 0.99;
update statistics medium for table b3_line
(b3subiid,
b3lineno,
advaldutyrateumeas,
advalrate1,
convtoqty1,
convtoqty2,
convtoqty3,
excduty,
excdutyrateumeas,
excdutyrate,
exchgrate,
exctax,
exctaxrateumeas,
exctaxrate,
gst,
gstrate,
oicspecialaut,
partkeywrd,
partsufx,
partdesc,
simacode,
simaval,
spcdutyrateumeas,
spcrate,
tariffcode,
vfcc,
vfd,
vfdcode,
vft,
linecomment,
advalduty,
spcduty,
totalduty,
gstexemptcode,
exctaxexmptcode,
rulingnumber,
trqno,
prevtransno,
prevlineno) resolution 1.0 0.99 ;
update statistics medium for table b3_recap_details
(b3lineiid,
ccipageno,
ccilineno,
uom,
quantity,
amount,
percentsplit,
unitprice)
RESOLUTION 1.0 0.99 ;
update statistics medium for table tariff
(vendorname,
productsufx,
approvalcode,
b3refbrch,
b3refno,
createdate,
cooindicator,
cooexprydate,
exctaxlicind,
gstexemptcode,
gstratecode,
lastuseddate,
moddate,
moduser,
oic,
oicexprydate,
percentsplit,
placeexp,
remissno,
remissexprydate,
rulingno,
rulingexprydate,
specialinstruct,
tarifftrtmnt,
vfdcode,
exctaxrate,
exctaxamt,
exctaxunit,
exctaxdeduct,
exctaxdeductunit,
exctaxexmptcode,
projectcode,
businessunitcode,
materialclasscode,
countryorigin,
requirementid,
version,
ogdextension,
enduse,
miscellaneous,
regtype01) RESOLUTION 1.0 0.99 ;
close database ;
-- Created by Denny Guo @ Feb 12, 2007
database ip_0p@ipdb ;
update statistics medium for table systables ;
update statistics medium for table syscolumns ;
update statistics medium for table sysindexes ;
update statistics medium for table systabauth ;
update statistics medium for table syscolauth ;
update statistics medium for table sysviews ;
update statistics medium for table sysusers ;
update statistics medium for table sysdepend ;
update statistics medium for table sysconstraints ;
update statistics medium for table sysreferences ;
update statistics medium for table sysdefaults ;
update statistics medium for table syscoldepend ;
update statistics medium for table sysprocedures ;
update statistics medium for table sysprocbody ;
update statistics medium for table sysprocplan ;
update statistics medium for table sysprocauth ;
update statistics medium for table sysblobs ;
update statistics medium for table systriggers ;
update statistics medium for table systrigbody ;
update statistics medium for table sysdistrib ;
update statistics medium for table sysfragments ;
update statistics medium for table sysobjstate ;
update statistics medium for table client_invoice ;
update statistics medium for table ctry_code ;
update statistics medium for table tservice ;
update statistics medium for table state_model ;
update statistics medium for table usport_exit ;
update statistics medium for table stringtable ;
update statistics medium for table claim_log ;
update statistics medium for table branch ;
update statistics medium for table carrier ;
update statistics medium for table hs_uom ;
update statistics medium for table hs_duty_rate ;
update statistics medium for table as_accounted ;
update statistics medium for table as_claimed ;
update statistics medium for table transp_mode ;
update statistics medium for table user_locus_xref ;
update statistics medium for table contact_type ;
update statistics medium for table canct_off ;
update statistics medium for table client ;
update statistics medium for table lii_contact ;
update statistics medium for table tariff_code ;
update statistics medium for table lii_client ;
update statistics medium for table securuser ;
update statistics medium for table lii_account ;
update statistics medium for table account_contact ;
update statistics medium for table b3b ;
update statistics medium for table search_criteria ;
update statistics medium for table b3_line_comment ;
update statistics medium for table terr ;
update statistics medium for table gst_rate_code ;
update statistics medium for table srch_crit_batch ;
update statistics medium for table bulletin ;
update statistics medium for table company ;
update statistics medium for table insight_pdq ;
update statistics medium for table bat_info ;
update statistics medium for table b3;
update statistics medium for table status_history;
update statistics medium for table b3_subheader;
update statistics medium for table b3_line;
update statistics medium for table b3_recap_details;
update statistics medium for table tariff;
close database ;
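The directory listing above shows these .sql scripts alongside upstat.ksh and matching .out files. The real upstat.ksh was not shown; the sketch below is a hypothetical wrapper in that style, which builds one dbaccess invocation per statistics script and routes its output to a matching .out file.

```shell
# Hypothetical wrapper in the style of upstat.ksh (the actual script
# is not reproduced in this manual): build one dbaccess command per
# statistics script, logging to a matching .out file as in the listing.
build_upstat_cmds() {
    dir=${1:?usage: build_upstat_cmds <script-dir>}
    for sql in "$dir"/*.sql; do
        out="${sql%.sql}.out"
        # each .sql opens its own database, so dbaccess takes "-" as db
        printf 'dbaccess - %s > %s 2>&1\n' "$sql" "$out"
    done
}
```

Printing the commands (rather than running them directly) lets the wrapper be reviewed or dry-run before execution via `build_upstat_cmds /usr/apps/inf/bob/upstat | sh`.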
Practice One: Migrate Informix from the production server to a test server and
configure continuous log restore with ontape
1. On ifx01, create a full (level-0) Informix backup in a local directory:
$ touch /insight_db_dr/ipdb_level_0
$ chown informix:informix /insight_db_dr/ipdb_level_0
$ chmod 666 /insight_db_dr/ipdb_level_0
Tips: the backup target file /insight_db_dr/ipdb_level_0 must exist beforehand, be owned by informix, and have mode 666 so that ontape can write to it.
$ ontape -s -L 0 -t /insight_db_dr/ipdb_level_0
$ cd /usr/apps; tar -cvf /insight_db_dr/inf.tar ./inf
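Before exporting these files to cm07, it can be worth confirming they are usable. The following sanity check is a sketch, not part of the documented procedure: it verifies the level-0 backup file is non-empty and the tar archive lists cleanly.

```shell
# Optional sanity check (a sketch, not part of the documented steps):
# confirm the level-0 backup file is non-empty and the tar archive is
# readable before sharing them with cm07.
check_backup() {
    bkup=${1:?backup file}
    archive=${2:?tar archive}
    [ -s "$bkup" ] || { echo "backup $bkup is empty" >&2; return 1; }
    tar -tf "$archive" >/dev/null || { echo "archive $archive unreadable" >&2; return 1; }
    echo "backup and archive look OK"
}
```

Usage here would be `check_backup /insight_db_dr/ipdb_level_0 /insight_db_dr/inf.tar`.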
2. NFS-export /insight_db_dr so that cm07 can mount it:
# exportfs -i /insight_db_dr
3. On cm07, create the informix group and user (group first, so it can be the user's primary group):
#mkgroup informix
#mkuser pgrp=informix informix
Tips: you can adjust user and group attributes (home directory, login shell, and so on) with chuser/chgroup, for example:
# chuser home=/restore/informix shell=/usr/bin/ksh informix
The primary group of the informix user must be informix.
root@cm07# chlv -x 1024 restorelv
root@cm07# lslv restorelv
LOGICAL VOLUME: restorelv VOLUME GROUP: admtsmvg
LV IDENTIFIER: 00033baa00004c000000011a1720ca22.5 PERMISSION: read/write
VG STATE: active/complete LV STATE: opened/syncd
TYPE: jfs2 WRITE VERIFY: off
MAX LPs: 1024 PP SIZE: 256 megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 512 PPs: 512
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: middle UPPER BOUND: 32
MOUNT POINT: /restore LABEL: /restore
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?: NO
root@cm07# chfs -a size=160G /restore
Filesystem size changed to 335544320
root@cm07# cat crchunk
mkdir -p /restore/informix/ix_root
mkdir -p /restore/informix/ix_llog
mkdir -p /restore/informix/ix_plog
mkdir -p /restore/informix/ix_dat1
mkdir -p /restore/informix/ix_dat2
mkdir -p /restore/informix/ix_idx1
mkdir -p /restore/informix/ix_idx2
mkdir -p /restore/informix/ix_temp
mkdir -p /restore/informix/ix_idx2
mkdir -p /restore/informix/ix_dat3
mkdir -p /restore/informix/ix_idx3
touch /restore/informix/ix_root/ix_root.1
touch /restore/informix/ix_llog/ix_llog.1
touch /restore/informix/ix_plog/ix_plog.1
touch /restore/informix/ix_dat1/ix_dat1.1
touch /restore/informix/ix_dat1/ix_dat1.2
touch /restore/informix/ix_dat1/ix_dat1.3
touch /restore/informix/ix_dat1/ix_dat1.4
touch /restore/informix/ix_dat1/ix_dat1.5
touch /restore/informix/ix_dat1/ix_dat1.6
touch /restore/informix/ix_dat1/ix_dat1.7
touch /restore/informix/ix_dat1/ix_dat1.8
touch /restore/informix/ix_dat1/ix_dat1.9
touch /restore/informix/ix_dat1/ix_dat1.10
touch /restore/informix/ix_dat1/ix_dat1.11
touch /restore/informix/ix_dat1/ix_dat1.12
touch /restore/informix/ix_dat1/ix_dat1.13
touch /restore/informix/ix_dat1/ix_dat1.14
touch /restore/informix/ix_dat1/ix_dat1.15
touch /restore/informix/ix_dat2/ix_dat2.1
touch /restore/informix/ix_dat2/ix_dat2.2
touch /restore/informix/ix_dat2/ix_dat2.3
touch /restore/informix/ix_dat2/ix_dat2.4
touch /restore/informix/ix_dat2/ix_dat2.5
touch /restore/informix/ix_dat2/ix_dat2.6
touch /restore/informix/ix_dat2/ix_dat2.7
touch /restore/informix/ix_dat2/ix_dat2.8
touch /restore/informix/ix_dat2/ix_dat2.9
touch /restore/informix/ix_dat2/ix_dat2.10
touch /restore/informix/ix_dat2/ix_dat2.11
touch /restore/informix/ix_dat2/ix_dat2.12
touch /restore/informix/ix_dat2/ix_dat2.13
touch /restore/informix/ix_dat2/ix_dat2.14
touch /restore/informix/ix_dat2/ix_dat2.15
touch /restore/informix/ix_idx1/ix_idx1.1
touch /restore/informix/ix_idx1/ix_idx1.2
touch /restore/informix/ix_idx1/ix_idx1.3
touch /restore/informix/ix_idx2/ix_idx2.1
touch /restore/informix/ix_idx2/ix_idx2.2
touch /restore/informix/ix_idx2/ix_idx2.3
touch /restore/informix/ix_temp/ix_temp.1
touch /restore/informix/ix_temp/ix_temp.2
touch /restore/informix/ix_temp/ix_temp.3
touch /restore/informix/ix_idx2/ix_idx2.4
touch /restore/informix/ix_dat2/ix_dat2.16
touch /restore/informix/ix_idx1/ix_idx1.4
touch /restore/informix/ix_dat2/ix_dat2.17
touch /restore/informix/ix_dat2/ix_dat2.18
touch /restore/informix/ix_idx1/ix_idx1.5
touch /restore/informix/ix_idx2/ix_idx2.5
touch /restore/informix/ix_dat2/ix_dat2.19
touch /restore/informix/ix_idx1/ix_idx1.6
touch /restore/informix/ix_dat2/ix_dat2.20
touch /restore/informix/ix_dat2/ix_dat2.21
touch /restore/informix/ix_dat2/ix_dat2.22
touch /restore/informix/ix_dat2/ix_dat2.23
touch /restore/informix/ix_idx1/ix_idx1.7
touch /restore/informix/ix_dat2/ix_dat2.24
touch /restore/informix/ix_dat2/ix_dat2.25
touch /restore/informix/ix_dat3/ix_dat3.1
touch /restore/informix/ix_dat3/ix_dat3.2
touch /restore/informix/ix_dat3/ix_dat3.3
touch /restore/informix/ix_dat3/ix_dat3.4
touch /restore/informix/ix_dat3/ix_dat3.5
touch /restore/informix/ix_dat3/ix_dat3.6
touch /restore/informix/ix_dat3/ix_dat3.7
touch /restore/informix/ix_dat3/ix_dat3.8
touch /restore/informix/ix_dat3/ix_dat3.9
touch /restore/informix/ix_dat3/ix_dat3.10
touch /restore/informix/ix_dat3/ix_dat3.11
touch /restore/informix/ix_dat3/ix_dat3.12
touch /restore/informix/ix_dat3/ix_dat3.13
touch /restore/informix/ix_dat3/ix_dat3.14
touch /restore/informix/ix_dat3/ix_dat3.15
touch /restore/informix/ix_dat3/ix_dat3.16
touch /restore/informix/ix_idx3/ix_idx3.1
touch /restore/informix/ix_idx3/ix_idx3.2
touch /restore/informix/ix_idx3/ix_idx3.3
touch /restore/informix/ix_temp/ix_temp.4
touch /restore/informix/ix_dat3/ix_dat3.17
touch /restore/informix/ix_dat3/ix_dat3.18
touch /restore/informix/ix_dat1/ix_dat1.16
touch /restore/informix/ix_dat1/ix_dat1.17
touch /restore/informix/ix_dat1/ix_dat1.18
touch /restore/informix/ix_dat3/ix_dat3.19
touch /restore/informix/ix_idx3/ix_idx3.4
touch /restore/informix/ix_dat1/ix_dat1.19
touch /restore/informix/ix_dat1/ix_dat1.20
touch /restore/informix/ix_dat1/ix_dat1.21
touch /restore/informix/ix_dat1/ix_dat1.22
touch /restore/informix/ix_dat3/ix_dat3.20
touch /restore/informix/ix_dat1/ix_dat1.23
# cd /restore/informix; chmod -R 660 ix*
# chmod 755 ix*
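The long run of mkdir/touch lines in crchunk can equally be generated with a loop. The sketch below reproduces it, with the dbspace names and chunk counts read from the listing above and the base directory as a parameter; the chmod steps from the manual (660 on the chunk files, 755 on the top-level directories) would follow.

```shell
# Loop-based equivalent of the crchunk script above (a sketch): creates
# every chunk file under a base directory. Dbspace names and chunk
# counts are taken from the crchunk listing.
make_chunks() {
    base=${1:?base dir, e.g. /restore/informix}
    set -- ix_root:1 ix_llog:1 ix_plog:1 ix_dat1:23 ix_dat2:25 \
           ix_idx1:7 ix_idx2:5 ix_temp:4 ix_dat3:20 ix_idx3:4
    for spec; do
        space=${spec%%:*}      # dbspace name, e.g. ix_dat1
        n=${spec##*:}          # number of chunks in that dbspace
        mkdir -p "$base/$space"
        i=1
        while [ "$i" -le "$n" ]; do
            touch "$base/$space/$space.$i"
            i=$((i + 1))
        done
    done
}
```

On cm07 this would be invoked as `make_chunks /restore/informix`.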
root@cm07# cat renchunk
/ix_root/ix_root.1 0 /restore/informix/ix_root/ix_root.1 0
/ix_llog/ix_llog.1 0 /restore/informix/ix_llog/ix_llog.1 0
/ix_plog/ix_plog.1 0 /restore/informix/ix_plog/ix_plog.1 0
/ix_dat1/ix_dat1.1 0 /restore/informix/ix_dat1/ix_dat1.1 0
/ix_dat1/ix_dat1.2 0 /restore/informix/ix_dat1/ix_dat1.2 0
/ix_dat1/ix_dat1.3 0 /restore/informix/ix_dat1/ix_dat1.3 0
/ix_dat1/ix_dat1.4 0 /restore/informix/ix_dat1/ix_dat1.4 0
/ix_dat1/ix_dat1.5 0 /restore/informix/ix_dat1/ix_dat1.5 0
/ix_dat1/ix_dat1.6 0 /restore/informix/ix_dat1/ix_dat1.6 0
/ix_dat1/ix_dat1.7 0 /restore/informix/ix_dat1/ix_dat1.7 0
/ix_dat1/ix_dat1.8 0 /restore/informix/ix_dat1/ix_dat1.8 0
/ix_dat1/ix_dat1.9 0 /restore/informix/ix_dat1/ix_dat1.9 0
/ix_dat1/ix_dat1.10 0 /restore/informix/ix_dat1/ix_dat1.10 0
/ix_dat1/ix_dat1.11 0 /restore/informix/ix_dat1/ix_dat1.11 0
/ix_dat1/ix_dat1.12 0 /restore/informix/ix_dat1/ix_dat1.12 0
/ix_dat1/ix_dat1.13 0 /restore/informix/ix_dat1/ix_dat1.13 0
/ix_dat1/ix_dat1.14 0 /restore/informix/ix_dat1/ix_dat1.14 0
/ix_dat1/ix_dat1.15 0 /restore/informix/ix_dat1/ix_dat1.15 0
/ix_dat2/ix_dat2.1 0 /restore/informix/ix_dat2/ix_dat2.1 0
/ix_dat2/ix_dat2.2 0 /restore/informix/ix_dat2/ix_dat2.2 0
/ix_dat2/ix_dat2.3 0 /restore/informix/ix_dat2/ix_dat2.3 0
/ix_dat2/ix_dat2.4 0 /restore/informix/ix_dat2/ix_dat2.4 0
/ix_dat2/ix_dat2.5 0 /restore/informix/ix_dat2/ix_dat2.5 0
/ix_dat2/ix_dat2.6 0 /restore/informix/ix_dat2/ix_dat2.6 0
/ix_dat2/ix_dat2.7 0 /restore/informix/ix_dat2/ix_dat2.7 0
/ix_dat2/ix_dat2.8 0 /restore/informix/ix_dat2/ix_dat2.8 0
/ix_dat2/ix_dat2.9 0 /restore/informix/ix_dat2/ix_dat2.9 0
/ix_dat2/ix_dat2.10 0 /restore/informix/ix_dat2/ix_dat2.10 0
/ix_dat2/ix_dat2.11 0 /restore/informix/ix_dat2/ix_dat2.11 0
/ix_dat2/ix_dat2.12 0 /restore/informix/ix_dat2/ix_dat2.12 0
/ix_dat2/ix_dat2.13 0 /restore/informix/ix_dat2/ix_dat2.13 0
/ix_dat2/ix_dat2.14 0 /restore/informix/ix_dat2/ix_dat2.14 0
/ix_dat2/ix_dat2.15 0 /restore/informix/ix_dat2/ix_dat2.15 0
/ix_idx1/ix_idx1.1 0 /restore/informix/ix_idx1/ix_idx1.1 0
/ix_idx1/ix_idx1.2 0 /restore/informix/ix_idx1/ix_idx1.2 0
/ix_idx1/ix_idx1.3 0 /restore/informix/ix_idx1/ix_idx1.3 0
/ix_idx2/ix_idx2.1 0 /restore/informix/ix_idx2/ix_idx2.1 0
/ix_idx2/ix_idx2.2 0 /restore/informix/ix_idx2/ix_idx2.2 0
/ix_idx2/ix_idx2.3 0 /restore/informix/ix_idx2/ix_idx2.3 0
/ix_temp/ix_temp.1 0 /restore/informix/ix_temp/ix_temp.1 0
/ix_temp/ix_temp.2 0 /restore/informix/ix_temp/ix_temp.2 0
/ix_temp/ix_temp.3 0 /restore/informix/ix_temp/ix_temp.3 0
/ix_idx2/ix_idx2.4 0 /restore/informix/ix_idx2/ix_idx2.4 0
/ix_dat2/ix_dat2.16 0 /restore/informix/ix_dat2/ix_dat2.16 0
/ix_idx1/ix_idx1.4 0 /restore/informix/ix_idx1/ix_idx1.4 0
/ix_dat2/ix_dat2.17 0 /restore/informix/ix_dat2/ix_dat2.17 0
/ix_dat2/ix_dat2.18 0 /restore/informix/ix_dat2/ix_dat2.18 0
/ix_idx1/ix_idx1.5 0 /restore/informix/ix_idx1/ix_idx1.5 0
/ix_idx2/ix_idx2.5 0 /restore/informix/ix_idx2/ix_idx2.5 0
/ix_dat2/ix_dat2.19 0 /restore/informix/ix_dat2/ix_dat2.19 0
/ix_idx1/ix_idx1.6 0 /restore/informix/ix_idx1/ix_idx1.6 0
/ix_dat2/ix_dat2.20 0 /restore/informix/ix_dat2/ix_dat2.20 0
/ix_dat2/ix_dat2.21 0 /restore/informix/ix_dat2/ix_dat2.21 0
/ix_dat2/ix_dat2.22 0 /restore/informix/ix_dat2/ix_dat2.22 0
/ix_dat2/ix_dat2.23 0 /restore/informix/ix_dat2/ix_dat2.23 0
/ix_idx1/ix_idx1.7 0 /restore/informix/ix_idx1/ix_idx1.7 0
/ix_dat2/ix_dat2.24 0 /restore/informix/ix_dat2/ix_dat2.24 0
/ix_dat2/ix_dat2.25 0 /restore/informix/ix_dat2/ix_dat2.25 0
/ix_dat3/ix_dat3.1 0 /restore/informix/ix_dat3/ix_dat3.1 0
/ix_dat3/ix_dat3.2 0 /restore/informix/ix_dat3/ix_dat3.2 0
/ix_dat3/ix_dat3.3 0 /restore/informix/ix_dat3/ix_dat3.3 0
/ix_dat3/ix_dat3.4 0 /restore/informix/ix_dat3/ix_dat3.4 0
/ix_dat3/ix_dat3.5 0 /restore/informix/ix_dat3/ix_dat3.5 0
/ix_dat3/ix_dat3.6 0 /restore/informix/ix_dat3/ix_dat3.6 0
/ix_dat3/ix_dat3.7 0 /restore/informix/ix_dat3/ix_dat3.7 0
/ix_dat3/ix_dat3.8 0 /restore/informix/ix_dat3/ix_dat3.8 0
/ix_dat3/ix_dat3.9 0 /restore/informix/ix_dat3/ix_dat3.9 0
/ix_dat3/ix_dat3.10 0 /restore/informix/ix_dat3/ix_dat3.10 0
/ix_dat3/ix_dat3.11 0 /restore/informix/ix_dat3/ix_dat3.11 0
/ix_dat3/ix_dat3.12 0 /restore/informix/ix_dat3/ix_dat3.12 0
/ix_dat3/ix_dat3.13 0 /restore/informix/ix_dat3/ix_dat3.13 0
/ix_dat3/ix_dat3.14 0 /restore/informix/ix_dat3/ix_dat3.14 0
/ix_dat3/ix_dat3.15 0 /restore/informix/ix_dat3/ix_dat3.15 0
/ix_dat3/ix_dat3.16 0 /restore/informix/ix_dat3/ix_dat3.16 0
/ix_idx3/ix_idx3.1 0 /restore/informix/ix_idx3/ix_idx3.1 0
/ix_idx3/ix_idx3.2 0 /restore/informix/ix_idx3/ix_idx3.2 0
/ix_idx3/ix_idx3.3 0 /restore/informix/ix_idx3/ix_idx3.3 0
/ix_temp/ix_temp.4 0 /restore/informix/ix_temp/ix_temp.4 0
/ix_dat3/ix_dat3.17 0 /restore/informix/ix_dat3/ix_dat3.17 0
/ix_dat3/ix_dat3.18 0 /restore/informix/ix_dat3/ix_dat3.18 0
/ix_dat1/ix_dat1.16 0 /restore/informix/ix_dat1/ix_dat1.16 0
/ix_dat1/ix_dat1.17 0 /restore/informix/ix_dat1/ix_dat1.17 0
/ix_dat1/ix_dat1.18 0 /restore/informix/ix_dat1/ix_dat1.18 0
/ix_dat3/ix_dat3.19 0 /restore/informix/ix_dat3/ix_dat3.19 0
/ix_idx3/ix_idx3.4 0 /restore/informix/ix_idx3/ix_idx3.4 0
/ix_dat1/ix_dat1.19 0 /restore/informix/ix_dat1/ix_dat1.19 0
/ix_dat1/ix_dat1.20 0 /restore/informix/ix_dat1/ix_dat1.20 0
/ix_dat1/ix_dat1.21 0 /restore/informix/ix_dat1/ix_dat1.21 0
/ix_dat1/ix_dat1.22 0 /restore/informix/ix_dat1/ix_dat1.22 0
/ix_dat3/ix_dat3.20 0 /restore/informix/ix_dat3/ix_dat3.20 0
/ix_dat1/ix_dat1.23 0 /restore/informix/ix_dat1/ix_dat1.23 0
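Each renchunk line maps an original chunk (path and offset) to its location on cm07. This format matches what ontape's chunk-rename restore option expects (typically `ontape -r -rename -f renchunk`; verify the exact syntax against the ontape documentation for your IDS version). A file like this can be generated from the chunk files on disk rather than typed by hand; the sketch below does so under the assumption that every restored chunk lives under one base directory mirroring the original /ix_* layout.

```shell
# Sketch: emit renchunk-style lines ("old_path old_offset new_path
# new_offset", offsets 0) for every chunk file under a base directory,
# mapping the original /ix_* layout to its copy under that base.
gen_renchunk() {
    base=${1:?base dir, e.g. /restore/informix}
    find "$base" -type f -name 'ix_*.*' | sort | while read -r new; do
        old=${new#"$base"}     # strip base prefix -> /ix_dat1/ix_dat1.1
        printf '%s 0 %s 0\n' "$old" "$new"
    done
}
```

On cm07: `gen_renchunk /restore/informix > renchunk`, then diff against the hand-written file before the restore.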
4. On cm07, mount the filesystem exported by ifx01 and extract the application archive:
#mount ifx01:/insight_db_dr /restore/idsbkup
# cd /restore/informix; tar -xvf /restore/idsbkup/inf.tar
5. Adapt the Informix IDS environment to cm07 and verify it as user informix:
# su - informix
$ env
_=/usr/bin/env
LANG=en_US
LOGIN=informix
PATH=/restore/informix/inf/ver115UC3/bin:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X1
1:/sbin:/usr/java14/jre/bin:/usr/java14/bin
INFBKUP=/restore/informix/bkup
INFINC=/restore/informix/inf/ver115UC3/incl
INFXCPUVPPRIORITY=90
LC__FASTMSG=true
INFPLATFORM=IBMAIX
LOGNAME=informix
MAIL=/usr/spool/mail/informix
LOCPATH=/usr/lib/nls/loc
INFVERSION=ver115UC3
TERMCAP=/restore/informix/inf/ver115UC3/etc/termcap
INFLOGDIR=/restore/informix/log
USER=informix
AUTHSTATE=files
INFXIOVPPRIORITY=90
INFROOT=/usr/apps/inf
DEFAULT_BROWSER=/usr/bin/mozilla
SHELL=/usr/bin/ksh
ODMDIR=/etc/objrepos
INFORMIXTERM=termcap
HOME=/restore/informix
INFORMIXDIR=/restore/informix/inf/ver115UC3
INFBIN=/restore/informix/inf/ver115UC3/bin
TERM=xterm
MAILMSG=[YOU HAVE NEW MAIL]
ONCONFIG=onconfig_ipdb
INFXNETVPPRIORITY=90
INFLIB=/usr/apps/inf/ver115UC3/lib
PWD=/restore/informix
TZ=EST5EDT,M3.2.5,M11.1.0
ARC_CONFIG=onarconfig_ipdb
INFXMSCVPPRIORITY=90
SYSROOT=/usr/apps
INFORMIXSERVER=ipdb
A__z=! LOGNAME
NLSPATH=/usr/lib/nls/msg/%L/%N:/usr/lib/nls/msg/%L/%N.cat
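Localizing the environment amounts to making sure the informix login shell on cm07 exports the restore-path values shown in the env listing above. A minimal sketch of such a profile fragment, with values taken verbatim from the listing (written to /tmp here so the real ~informix/.profile is untouched):

```shell
# Candidate fragment for ~informix/.profile on cm07 -- values taken from
# the env listing above; adjust if your restore mount point differs.
cat > /tmp/informix_profile.sh <<'EOF'
export INFORMIXDIR=/restore/informix/inf/ver115UC3
export INFORMIXSERVER=ipdb
export ONCONFIG=onconfig_ipdb
export ARC_CONFIG=onarconfig_ipdb
export INFORMIXTERM=termcap
export TERMCAP=$INFORMIXDIR/etc/termcap
export PATH=$INFORMIXDIR/bin:$PATH
EOF

. /tmp/informix_profile.sh
echo "$INFORMIXDIR"
```

ONCONFIG and ARC_CONFIG are file names resolved relative to $INFORMIXDIR/etc, so pointing INFORMIXDIR at the /restore copy is what redirects the instance to the restored configuration.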
6. Configure the following three files on cm07: in onconfig_ipdb, change the log file paths to the new cm07 environment; in sqlhosts, change the hostname from ifx01 to cm07; and add port 6800/tcp to /etc/services on cm07.
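A minimal sketch of the sqlhosts and /etc/services edits, demonstrated on scratch copies under /tmp (on cm07 the real files are /restore/informix/inf/ver115UC3/etc/sqlhosts and /etc/services; the assumption that the 6800/tcp entry is for the ipdbsvc listener named in sqlhosts is mine):

```shell
# Step 6 edits, demonstrated on scratch copies standing in for the real files.
demo=/tmp/step6_demo
rm -rf "$demo" && mkdir -p "$demo" && cd "$demo"

printf 'ipdb onsoctcp ifx01 ipdbsvc\nardb onsoctcp ifx01 ardbsvc\n' > sqlhosts
: > services

# 1) sqlhosts: repoint the listeners from ifx01 to cm07
sed 's/ifx01/cm07/g' sqlhosts > sqlhosts.new && mv sqlhosts.new sqlhosts

# 2) /etc/services: add the listener port if it is not already present
grep -q '^ipdbsvc' services || echo 'ipdbsvc  6800/tcp' >> services

cat sqlhosts services
```

The grep guard keeps the edit idempotent, so re-running the step does not produce duplicate /etc/services entries. The resulting sqlhosts should match the listing below.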
root@cm07# cat /restore/informix/inf/ver115*/etc/sqlhosts
#**************************************************************************
#
# Licensed Material - Property Of IBM
#
# "Restricted Materials of IBM"
#
# IBM Informix Dynamic Server
# (c) Copyright IBM Corporation 1996, 2004 All rights reserved.
#
# Title: sqlhosts.demo
# Description:
# Default sqlhosts file for running demos.
#
#**************************************************************************
# IANA (www.iana.org) assigned port number/service names for Informix:
# sqlexec 9088/tcp
# sqlexec-ssl 9089/tcp
#demo_on onipcshm on_hostname on_servername
#demo_se seipcpip se_hostname sqlexec
#IFX database;
ipdb onsoctcp cm07 ipdbsvc
ardb onsoctcp cm07 ardbsvc
systestdb onsoctcp ipdev systestdbsvc
root@cm07# cat /restore/informix/inf/ver115*/etc/onconfig_ipdb
###################################################################
# Licensed Material - Property Of IBM
#
# "Restricted Materials of IBM"
#
# IBM Informix Dynamic Server
# Copyright IBM Corporation 1996, 2008 All rights reserved.
#
# Title: onconfig.std
# Description: IBM Informix Dynamic Server Configuration Parameters
#
# Important: $INFORMIXDIR now resolves to the environment
# variable INFORMIXDIR. Replace the value of the INFORMIXDIR
# environment variable only if the path you want is not under
# $INFORMIXDIR.
#
# For additional information on the parameters:
# http://publib.boulder.ibm.com/infocenter/idshelp/v115/index.jsp
###################################################################
###################################################################
# Root Dbspace Configuration Parameters
###################################################################
# ROOTNAME - The root dbspace name to contain reserved pages and
# internal tracking tables.
# ROOTPATH - The path for the device containing the root dbspace
# ROOTOFFSET - The offset, in KB, of the root dbspace into the
# device. The offset is required for some raw devices.
# ROOTSIZE - The size of the root dbspace, in KB. The value of
# 200000 allows for a default user space of about
# 100 MB and the default system space requirements.
# MIRROR - Enable (1) or disable (0) mirroring
# MIRRORPATH - The path for the device containing the mirrored
# root dbspace
# MIRROROFFSET - The offset, in KB, into the mirrored device
#
# Warning: Always verify ROOTPATH before performing
# disk initialization (oninit -i or -iy) to
# avoid disk corruption of another instance
###################################################################
ROOTNAME rootdbs
ROOTPATH /ix_root/ix_root.1
ROOTOFFSET 0
ROOTSIZE 220000
MIRROR 0
#MIRRORPATH $INFORMIXDIR/tmp/demo_on.root_mirror
MIRRORPATH
MIRROROFFSET 0
###################################################################
# Physical Log Configuration Parameters
###################################################################
# PHYSFILE - The size, in KB, of the physical log on disk.
# If RTO_SERVER_RESTART is enabled, the
# suggested formula for the size of PHYSFILE
# (up to about 1 GB) is:
# PHYSFILE = Size of BUFFERS * 1.1
# PLOG_OVERFLOW_PATH - The directory for extra physical log files
# if the physical log overflows during recovery
# or long transaction rollback
# PHYSBUFF - The size of the physical log buffer, in KB
###################################################################
PHYSFILE 250000
#PLOG_OVERFLOW_PATH $INFORMIXDIR/tmp
PLOG_OVERFLOW_PATH
PHYSBUFF 128
###################################################################
# Logical Log Configuration Parameters
###################################################################
# LOGFILES - The number of logical log files
# LOGSIZE - The size of each logical log, in KB
# DYNAMIC_LOGS - The type of dynamic log allocation.
# Acceptable values are:
# 2 Automatic. IDS adds a new logical log to the
# root dbspace when necessary.
# 1 Manual. IDS notifies the DBA to add new logical
# logs when necessary.
# 0 Disabled
# LOGBUFF - The size of the logical log buffer, in KB
###################################################################
LOGFILES 72
LOGSIZE 10000
DYNAMIC_LOGS 1
LOGBUFF 64
###################################################################
# Long Transaction Configuration Parameters
###################################################################
# If IDS cannot roll back a long transaction, the server hangs
# until more disk space is available.
#
# LTXHWM - The percentage of the logical logs that can be
# filled before a transaction is determined to be a
# long transaction and is rolled back
# LTXEHWM - The percentage of the logical logs that have been
# filled before the server suspends all other
# transactions so that the long transaction being
# rolled back has exclusive use of the logs
#
# When dynamic logging is on, you can set higher values for
# LTXHWM and LTXEHWM because the server can add new logical logs
# during long transaction rollback. Set lower values to limit the
# number of new logical logs added.
#
# If dynamic logging is off, set LTXHWM and LTXEHWM to
# lower values, such as 50 and 60 or lower, to prevent long
# transaction rollback from hanging the server due to lack of
# logical log space.
#
# When using Enterprise Replication, set LTXEHWM to at least 30%
# higher than LTXHWM to minimize log overruns.
###################################################################
LTXHWM 50
LTXEHWM 60
###################################################################
# Server Message File Configuration Parameters
###################################################################
# MSGPATH - The path of the IDS message log file
# CONSOLE - The path of the IDS console message file
###################################################################
MSGPATH /restore/informix/log/ipdb/online.log
MSG_DATE 1
CONSOLE /restore/informix/log/ipdb/online.con
###################################################################
# Tblspace Configuration Parameters
###################################################################
# TBLTBLFIRST - The first extent size, in KB, for the tblspace
# tblspace. Must be in multiples of the page size.
# TBLTBLNEXT - The next extent size, in KB, for the tblspace
# tblspace. Must be in multiples of the page size.
# The default setting for both is 0, which allows IDS to manage
# extent sizes automatically.
#
# TBLSPACE_STATS - Enables (1) or disables (0) IDS to maintain
# tblspace statistics
##################################################################
TBLTBLFIRST 0
TBLTBLNEXT 0
TBLSPACE_STATS 1
###################################################################
# Temporary dbspace and sbspace Configuration Parameters
###################################################################
# DBSPACETEMP - The list of dbspaces used to store temporary
# tables and other objects. Specify a colon
# separated list of dbspaces that exist when the
# server is started. If no dbspaces are specified,
# or if all specified dbspaces are not valid,
# temporary files are created in the /tmp directory
# instead.
# SBSPACETEMP - The list of sbspaces used to store temporary
# tables for smart large objects. If no sbspace
# is specified, temporary files are created in
# a standard sbspace.
###################################################################
DBSPACETEMP tempdbs1,tempdbs2,tempdbs3,tempdbs4
SBSPACETEMP
###################################################################
# Dbspace and sbspace Configuration Parameters
###################################################################
# SBSPACENAME - The default sbspace name where smart large objects
# are stored if no sbspace is specified during
# smart large object creation. Some DataBlade
# modules store smart large objects in this
# location.
# SYSSBSPACENAME - The default sbspace for system statistics
# collection. Otherwise, IDS stores statistics
# in the sysdistrib system catalog table.
# ONDBSPACEDOWN - Specifies how IDS behaves when it encounters a
# dbspace that is offline. Acceptable values
# are:
# 0 Continue
# 1 Stop
# 2 Wait for DBA action
###################################################################
SBSPACENAME
SYSSBSPACENAME
ONDBSPACEDOWN 2
###################################################################
# System Configuration Parameters
###################################################################
# SERVERNUM - The unique ID for the IDS instance. Acceptable
# values are 0 through 255, inclusive.
# DBSERVERNAME - The name of the default database server
# DBSERVERALIASES - The list of up to 32 alternative dbservernames,
# separated by commas
###################################################################
SERVERNUM 10
DBSERVERNAME ipdb
DBSERVERALIASES
###################################################################
# Network Configuration Parameters
###################################################################
# NETTYPE - The configuration of poll threads
# for a specific protocol. The
# format is:
# NETTYPE <protocol>,<# poll threads>
# ,<number of connections/thread>
# ,(NET|CPU)
# You can include multiple NETTYPE
# entries for multiple protocols.
# LISTEN_TIMEOUT - The number of seconds that IDS
# waits for a connection
# MAX_INCOMPLETE_CONNECTIONS - The maximum number of incomplete
# connections before IDS logs a Denial
# of Service (DoS) error
# FASTPOLL - Enables (1) or disables (0) fast
# polling of your network, if your
# operating system supports it.
###################################################################
NETTYPE soctcp,2,100,CPU
LISTEN_TIMEOUT 60
MAX_INCOMPLETE_CONNECTIONS 1024
FASTPOLL 1
###################################################################
# CPU-Related Configuration Parameters
###################################################################
# MULTIPROCESSOR - Specifies whether the computer has multiple
# CPUs. Acceptable values are: 0 (single
# processor), 1 (multiple processors or
# multi-core chips)
# VPCLASS cpu - Configures the CPU VPs. The format is:
# VPCLASS cpu,num=<#>[,max=<#>][,aff=<#>]
# [,noage]
# VP_MEMORY_CACHE_KB - Specifies the amount of private memory
# blocks of your CPU VP, in KB, that the
# database server can access.
# Acceptable values are:
# 0 (disable)
# 800 through 40% of the value of SHMTOTAL
# SINGLE_CPU_VP - Optimizes performance if IDS runs with
# only one CPU VP. Acceptable values are:
# 0 multiple CPU VPs
# Any nonzero value (optimize for one CPU VP)
###################################################################
MULTIPROCESSOR 1
#VPCLASS cpu,num=6,noage
VPCLASS cpu,num=8,noage
VP_MEMORY_CACHE_KB 0
SINGLE_CPU_VP 0
###################################################################
# AIO and Cleaner-Related Configuration Parameters
###################################################################
# VPCLASS aio - Configures the AIO VPs. The format is:
# VPCLASS aio,num=<#>[,max=<#>][,aff=<#>][,noage]
# CLEANERS - The number of page cleaner threads
# AUTO_AIOVPS - Enables (1) or disables (0) automatic management
# of AIO VPs
# DIRECT_IO - Enables (1) or disables (0) direct I/O for chunks
###################################################################
#VPCLASS aio,num=6
VPCLASS aio,num=36
CLEANERS 16
AUTO_AIOVPS 1
DIRECT_IO 0
#DIRECT_IO 1
###################################################################
# Lock-Related Configuration Parameters
###################################################################
# LOCKS - The initial number of locks when IDS starts.
# Dynamic locking can add extra locks if needed.
# DEF_TABLE_LOCKMODE - The default table lock mode for new tables.
# Acceptable values are ROW and PAGE (default).
###################################################################
LOCKS 3000000
DEF_TABLE_LOCKMODE ROW
###################################################################
# Shared Memory Configuration Parameters
###################################################################
# RESIDENT - Controls whether shared memory is resident.
# Acceptable values are:
# 0 off (default)
# 1 lock the resident segment only
# n lock the resident segment and the next n-1
# virtual segments, where n < 100
# -1 lock all resident and virtual segments
# SHMBASE - The shared memory base address; do not change
# SHMVIRTSIZE - The initial size, in KB, of the virtual
# segment of shared memory
# SHMADD - The size, in KB, of additional virtual shared
# memory segments
# EXTSHMADD - The size, in KB, of each extension shared
# memory segment
# SHMTOTAL - The maximum amount of shared memory for IDS,
# in KB. A 0 indicates no specific limit.
# SHMVIRT_ALLOCSEG - Controls when IDS adds a memory segment and
# the alarm level if the memory segment cannot
# be added.
# For the first field, acceptable values are:
# - 0 Disabled
# - A decimal number indicating the percentage
# of memory used before a segment is added
# - The number of KB remaining when a segment
# is added
# For the second field, specify an alarm level
# from 1 (non-event) to 5 (fatal error).
# SHMNOACCESS - A list of up to 10 memory address ranges
# that IDS cannot use to attach shared memory.
# Each address range is the start and end memory
# address in hex format, separated by a hyphen.
# Use a comma to separate each range in the list.
###################################################################
RESIDENT 0
SHMBASE 0x30000000L
SHMVIRTSIZE 500000
SHMADD 100000
EXTSHMADD 100000
SHMTOTAL 0
SHMVIRT_ALLOCSEG 0,3
SHMNOACCESS
###################################################################
# Checkpoint and System Block Configuration Parameters
###################################################################
# CKPTINTVL - Specifies how often, in seconds, IDS checks
# if a checkpoint is needed. 0 indicates that
# IDS does not check for checkpoints. Ignored
# if RTO_SERVER_RESTART is set.
# AUTO_CKPTS - Enables (1) or disables (0) monitoring of
# critical resource to trigger checkpoints
# more frequently if there is a chance that
# transaction blocking might occur.
# RTO_SERVER_RESTART - Specifies, in seconds, the Recovery Time
# Objective for IDS restart after a server
# failure. Acceptable values are 0 (off) and
# any number from 60-1800, inclusive.
# BLOCKTIMEOUT - Specifies the amount of time, in seconds,
# for a system block.
###################################################################
CKPTINTVL 600
AUTO_CKPTS 1
RTO_SERVER_RESTART 0
BLOCKTIMEOUT 3600
###################################################################
# Transaction-Related Configuration Parameters
###################################################################
# TXTIMEOUT - The distributed transaction timeout, in seconds
# DEADLOCK_TIMEOUT - The maximum time, in seconds, to wait for a
# lock in a distributed transaction.
# HETERO_COMMIT - Enables (1) or disables (0) heterogeneous
# commits for a distributed transaction
# involving an EGM gateway.
###################################################################
TXTIMEOUT 300
DEADLOCK_TIMEOUT 60
HETERO_COMMIT 0
###################################################################
# ontape Tape Device Configuration Parameters
###################################################################
# TAPEDEV - The tape device path for backups. To use standard
# I/O instead of a device, set to stdio.
# TAPEBLK - The tape block size, in KB, for backups
# TAPESIZE - The maximum amount of data to put on one backup
# tape. Acceptable values are 0 (unlimited) or any
# positive integral multiple of TAPEBLK.
###################################################################
TAPEDEV /dev/rmt0
#TAPEDEV /dev/null
TAPEBLK 1024
TAPESIZE 72000000
###################################################################
# ontape Logical Log Tape Device Configuration Parameters
###################################################################
# LTAPEDEV - The tape device path for logical logs
# LTAPEBLK - The tape block size, in KB, for backing up logical
# logs
# LTAPESIZE - The maximum amount of data to put on one logical
# log tape. Acceptable values are 0 (unlimited) or any
# positive integral multiple of LTAPEBLK.
###################################################################
LTAPEDEV /restore/informix/log/ipdb.log
LTAPEBLK 1024
LTAPESIZE 72000000
###################################################################
# Backup and Restore Configuration Parameters
###################################################################
# BAR_ACT_LOG - The ON-Bar activity log file location.
# Do not use the /tmp directory. Use a
# directory with restricted permissions.
# BAR_DEBUG_LOG - The ON-Bar debug log file location.
# Do not use the /tmp directory. Use a
# directory with restricted permissions.
# BAR_DEBUG - The debug level for ON-Bar. Acceptable
# values are 0 (off) through 9 (high).
# BAR_MAX_BACKUP - The number of backup threads used in a
# backup. Acceptable values are 0 (unlimited)
# or any positive integer.
# BAR_RETRY - Specifies the number of times to retry a
# backup or restore operation before reporting
# a failure
# BAR_NB_XPORT_COUNT - Specifies the number of data buffers that
# each onbar_d process uses to communicate
# with the database server
# BAR_XFER_BUF_SIZE - The size, in pages, of each data buffer.
# Acceptable values are 1 through 15 for
# 4 KB pages and 1 through 31 for 2 KB pages.
# RESTARTABLE_RESTORE - Enables ON-Bar to continue a backup after a
# failure. Acceptable values are OFF or ON.
# BAR_PROGRESS_FREQ - Specifies, in minutes, how often progress
# messages are placed in the ON-Bar activity
# log. Acceptable values are: 0 (record only
# completion messages) or 5 and above.
# BAR_BSALIB_PATH - The shared library for ON-Bar and the
# storage manager. The default value is
# $INFORMIXDIR/lib/ibsad001 (with a
# platform-specific file extension).
# BACKUP_FILTER - Specifies the pathname of a filter program
# to transform data during a backup, plus any
# program options
# RESTORE_FILTER - Specifies the pathname of a filter program
# to transform data during a restore, plus any
# program options
# BAR_PERFORMANCE - Specifies the type of performance statistics
# to report to the ON-Bar activity log for backup
# and restore operations.
# Acceptable values are:
# 0 = Turn off performance monitoring (Default)
# 1 = Display the time spent transferring data
# between the IDS instance and the storage
# manager
# 2 = Display timestamps in microseconds
# 3 = Display both timestamps and transfer
# statistics
###################################################################
BAR_ACT_LOG /restore/informix/log/ipdb/bar_act.log
BAR_DEBUG_LOG /restore/informix/log/ipdb/bar_dbug.log
BAR_DEBUG 0
BAR_MAX_BACKUP 0
BAR_RETRY 1
BAR_NB_XPORT_COUNT 20
BAR_XFER_BUF_SIZE 31
RESTARTABLE_RESTORE ON
BAR_PROGRESS_FREQ 0
BAR_BSALIB_PATH
BACKUP_FILTER
RESTORE_FILTER
BAR_PERFORMANCE 0
###################################################################
# Informix Storage Manager (ISM) Configuration Parameters
###################################################################
# ISM_DATA_POOL - Specifies the name for the ISM data pool
# ISM_LOG_POOL - Specifies the name for the ISM log pool
###################################################################
ISM_DATA_POOL ISMData
ISM_LOG_POOL ISMLogs
###################################################################
# Data Dictionary Cache Configuration Parameters
###################################################################
# DD_HASHSIZE - The number of data dictionary pools. Set to any
# positive integer; a prime number is recommended.
# DD_HASHMAX - The number of entries per pool.
# Set to any positive integer.
###################################################################
DD_HASHSIZE 31
DD_HASHMAX 10
###################################################################
# Data Distribution Configuration Parameters
###################################################################
# DS_HASHSIZE - The number of data distribution pools.
# Set to any positive integer; a prime number is
# recommended.
# DS_POOLSIZE - The maximum number of entries in the data
# distribution cache. Set to any positive integer.
###################################################################
DS_HASHSIZE 31
DS_POOLSIZE 127
##################################################################
# User Defined Routine (UDR) Cache Configuration Parameters
##################################################################
# PC_HASHSIZE - The number of UDR pools. Set to any
# positive integer; a prime number is recommended.
# PC_POOLSIZE - The maximum number of entries in the
# UDR cache. Set to any positive integer.
###################################################################
PC_HASHSIZE 31
PC_POOLSIZE 127
###################################################################
# SQL Statement Cache Configuration Parameters
###################################################################
# STMT_CACHE - Controls SQL statement caching. Acceptable
# values are:
# 0 Disabled
# 1 Enabled at the session level
# 2 All statements are cached
# STMT_CACHE_HITS - The number of times an SQL statement must be
# executed before becoming fully cached.
# 0 indicates that all statements are
# fully cached the first time.
# STMT_CACHE_SIZE - The size, in KB, of the SQL statement cache
# STMT_CACHE_NOLIMIT - Controls additional memory consumption.
# Acceptable values are:
# 0 Limit memory to STMT_CACHE_SIZE
# 1 Obtain as much memory, temporarily, as needed
# STMT_CACHE_NUMPOOL - The number of pools for the SQL statement
# cache. Acceptable value is a positive
# integer between 1 and 256, inclusive.
###################################################################
STMT_CACHE 0
STMT_CACHE_HITS 0
STMT_CACHE_SIZE 512
STMT_CACHE_NOLIMIT 0
STMT_CACHE_NUMPOOL 1
###################################################################
# Operating System Session-Related Configuration Parameters
###################################################################
# USEOSTIME - The precision of SQL statement timing.
# Accepted values are 0 (precision to seconds)
# and 1 (precision to subseconds). Subsecond
# precision can degrade performance.
# STACKSIZE - The size, in KB, for a session stack
# ALLOW_NEWLINE - Controls whether embedded new line characters
# in string literals are allowed in SQL
# statements. Acceptable values are 1 (allowed)
# and any number other than 1 (not allowed).
# USELASTCOMMITTED - Controls the committed read isolation level.
# Acceptable values are:
# - NONE Waits on a lock
# - DIRTY READ Uses the last committed value in
# place of a dirty read
# - COMMITTED READ Uses the last committed value
# in place of a committed read
# - ALL Uses the last committed value in place
# of all isolation levels that support the last
# committed option
###################################################################
USEOSTIME 0
STACKSIZE 64
ALLOW_NEWLINE 0
USELASTCOMMITTED NONE
###################################################################
# Index Related Configuration Parameters
###################################################################
# FILLFACTOR - The percentage of index page fullness
# MAX_FILL_DATA_PAGES - Enables (1) or disables (0) filling data
# pages that have variable length rows as
# full as possible
# BTSCANNER - Specifies the configuration settings for all
# btscanner threads. The format is:
# BTSCANNER num=<#>,threshold=<#>,rangesize=<#>,
# alice=(0-12),compression=[low|med|high|default]
# ONLIDX_MAXMEM - The amount of memory, in KB, allocated for
# the pre-image pool and updator log pool for
# each partition.
###################################################################
FILLFACTOR 90
MAX_FILL_DATA_PAGES 0
#BTSCANNER num=1,threshold=5000,rangesize=-1,alice=6,compression=default
BTSCANNER num=2,threshold=500,rangesize=-1,alice=6,compression=default
ONLIDX_MAXMEM 5120
###################################################################
# Parallel Database Query (PDQ) Configuration Parameters
###################################################################
# MAX_PDQPRIORITY - The maximum amount of resources, as a
# percentage, that PDQ can allocate to any
# one decision support query
# DS_MAX_QUERIES - The maximum number of concurrent decision
# support queries
# DS_TOTAL_MEMORY - The maximum amount, in KB, of decision
# support query memory
# DS_MAX_SCANS - The maximum number of concurrent decision
# support scans
# DS_NONPDQ_QUERY_MEM - The amount of non-PDQ query memory, in KB.
# Acceptable values are 128 to 25% of
# DS_TOTAL_MEMORY.
# DATASKIP - Specifies whether to skip dbspaces when
# processing a query. Acceptable values are:
# - ALL Skip all unavailable fragments
# - ON <dbspace1> <dbspace2>... Skip listed
# dbspaces
# - OFF Do not skip dbspaces (default)
###################################################################
MAX_PDQPRIORITY 100
DS_MAX_QUERIES
DS_TOTAL_MEMORY
DS_MAX_SCANS 1048576
DS_NONPDQ_QUERY_MEM 128
DATASKIP
###################################################################
# Optimizer Configuration Parameters
###################################################################
# OPTCOMPIND - Controls how the optimizer determines the best
# query path. Acceptable values are:
# 0 Nested loop joins are preferred
# 1 If isolation level is repeatable read,
# works the same as 0, otherwise works same as 2
# 2 Optimizer decisions are based on cost only
# DIRECTIVES - Specifies whether optimizer directives are
# enabled (1) or disabled (0). Default is 1.
# EXT_DIRECTIVES - Controls the use of external SQL directives.
# Acceptable values are:
# 0 Disabled
# 1 Enabled if the IFX_EXTDIRECTIVES environment
# variable is enabled
# 2 Enabled even if the IFX_EXTDIRECTIVES
# environment is not set
# OPT_GOAL - Controls how the optimizer should optimize for
# fastest retrieval. Acceptable values are:
# -1 All rows in a query
# 0 The first rows in a query
# IFX_FOLDVIEW - Enables (1) or disables (0) folding views that
# have multiple tables or a UNION ALL clause.
# Disabled by default.
# AUTO_REPREPARE - Enables (1) or disables (0) automatically
# re-optimizing stored procedures and re-preparing
# prepared statements when tables that are referenced
# by them change. Minimizes the occurrence of the
# -710 error.
####################################################################
OPTCOMPIND 2
DIRECTIVES 1
EXT_DIRECTIVES 0
OPT_GOAL -1
IFX_FOLDVIEW 0
AUTO_REPREPARE 1
###################################################################
# Read-ahead Configuration Parameters
###################################################################
#RA_PAGES - The number of pages, as a positive integer, to
# attempt to read ahead
#RA_THRESHOLD - The number of pages, as a positive integer, left
# before the next read-ahead group
###################################################################
RA_PAGES 64
RA_THRESHOLD 16
###################################################################
# SQL Tracing and EXPLAIN Plan Configuration Parameters
###################################################################
# EXPLAIN_STAT - Enables (1) or disables (0) including the Query
# Statistics section in the EXPLAIN output file
# SQLTRACE - Configures SQL tracing. The format is:
# SQLTRACE level=(low|med|high),ntraces=<#>,size=<#>,
# mode=(global|user)
###################################################################
EXPLAIN_STAT 0
#SQLTRACE level=low,ntraces=1000,size=2,mode=global
###################################################################
# Security Configuration Parameters
###################################################################
# DBCREATE_PERMISSION - Specifies the users who can create
# databases (by default, any user can).
# Add a DBCREATE_PERMISSION entry
# for each user who needs database
# creation privileges. Ensure user
# informix is authorized when you
# first initialize IDS.
# DB_LIBRARY_PATH - Specifies the locations, separated
# by commas, from which IDS can use
# UDR or UDT shared libraries. If set,
# make sure that all directories containing
# the blade modules are listed, to
# ensure all DataBlade modules will
# work.
# IFX_EXTEND_ROLE - Controls whether administrators
# can use the EXTEND role to specify
# which users can register external
# routines. Acceptable values are:
# 0 Any user can register external
# routines
# 1 Only users granted the ability
# to register external routines
# can do so (Default)
# SECURITY_LOCALCONNECTION - Specifies whether IDS performs
# security checking for local
# connections. Acceptable values are:
# 0 Off
# 1 Validate ID
# 2 Validate ID and port
# UNSECURE_ONSTAT - Controls whether non-DBSA users are
# allowed to run all onstat commands.
# Acceptable values are:
# 1 Enabled
# 0 Disabled (Default)
# ADMIN_USER_MODE_WITH_DBSA - Controls who can connect to IDS
# in administration mode. Acceptable
# values are:
# 1 DBSAs, users specified by
# ADMIN_MODE_USERS, and the user
# informix
# 0 Only the user informix (Default)
# ADMIN_MODE_USERS - Specifies the user names, separated by
# commas, who can connect to IDS in
# administration mode, in addition to
# the user informix
# SSL_KEYSTORE_LABEL - The label, up to 512 characters, of
# the IDS certificate used in Secure
# Sockets Layer (SSL) protocol
# communications.
###################################################################
DBCREATE_PERMISSION informix
#DB_LIBRARY_PATH
IFX_EXTEND_ROLE 1
SECURITY_LOCALCONNECTION
UNSECURE_ONSTAT 1
ADMIN_USER_MODE_WITH_DBSA
ADMIN_MODE_USERS
SSL_KEYSTORE_LABEL
###################################################################
# LBAC Configuration Parameters
###################################################################
# PLCY_POOLSIZE - The maximum number of entries in each hash
# bucket of the LBAC security information cache
# PLCY_HASHSIZE - The number of hash buckets in the LBAC security
# information cache
# USRC_POOLSIZE - The maximum number of entries in each hash
# bucket of the LBAC credential memory cache
# USRC_HASHSIZE - The number of hash buckets in the LBAC credential
# memory cache
###################################################################
PLCY_POOLSIZE 127
PLCY_HASHSIZE 31
USRC_POOLSIZE 127
USRC_HASHSIZE 31
###################################################################
# Optical Configuration Parameters
###################################################################
# STAGEBLOB - The name of the optical blobspace. Must be set to
# use the optical-storage subsystem.
# OPCACHEMAX - Maximum optical cache size, in KB
###################################################################
STAGEBLOB
OPCACHEMAX 0
###################################################################
# High Availability and Enterprise Replication Security
# Configuration Parameters
###################################################################
# ENCRYPT_HDR - Enables (1) or disables (0) encryption for HDR.
# ENCRYPT_SMX - Controls the level of encryption for RSS and
# SDS servers. Acceptable values are:
# 0 Do not encrypt (Default)
# 1 Encrypt if possible
# 2 Always encrypt
# ENCRYPT_CDR - Controls the level of encryption for ER.
# Acceptable values are:
# 0 Do not encrypt (Default)
# 1 Encrypt if possible
# 2 Always encrypt
# ENCRYPT_CIPHERS - A list of encryption ciphers and modes,
# separated by commas. Default is all.
# ENCRYPT_MAC - Controls the level of message authentication
# code (MAC). Acceptable values are off, high,
# medium, and low. List multiple values separated
# by commas; the highest common level between
# servers is used.
# ENCRYPT_MACFILE - The paths of the MAC key files, separated
# by commas. Use the builtin keyword to specify
# the built-in key. Default is builtin.
# ENCRYPT_SWITCH - Defines the frequencies, in minutes, at which
# ciphers and keys are renegotiated. Format is:
# <cipher_switch_time>,<key_switch_time>
# Default is 60,60.
###################################################################
ENCRYPT_HDR
ENCRYPT_SMX
ENCRYPT_CDR 0
ENCRYPT_CIPHERS
ENCRYPT_MAC
ENCRYPT_MACFILE
ENCRYPT_SWITCH
###################################################################
# Enterprise Replication (ER) Configuration Parameters
###################################################################
# CDR_EVALTHREADS - The number of evaluator threads per
# CPU VP and the number of additional
# threads, separated by a comma.
# Acceptable values are: a non-zero value
# followed by a non-negative value
# CDR_DSLOCKWAIT - The number of seconds the Datasync
# waits for database locks.
# CDR_QUEUEMEM - The maximum amount of memory, in KB,
# for the send and receive queues.
# CDR_NIFCOMPRESS - Controls the network interface
# compression level.
# Acceptable values are:
# -1 Never
# 0 None
# 1-9 Compression level
# CDR_SERIAL - Specifies the incremental size and
# the starting value of replicated
# serial columns. The format is:
# <delta>,<offset>
# CDR_DBSPACE - The dbspace name for the syscdr
# database.
# CDR_QHDR_DBSPACE - The name of the transaction record
# dbspace. Default is the root dbspace.
# CDR_QDATA_SBSPACE - The names of sbspaces for spooled
# transaction data, separated by commas.
# CDR_MAX_DYNAMIC_LOGS - The maximum number of dynamic log
# requests that ER can make within one
# server session. Acceptable values are:
# -1 (unlimited), 0 (disabled),
# 1 through n (limit to n requests)
# CDR_SUPPRESS_ATSRISWARN - The Datasync error and warning code
# numbers to be suppressed in ATS and RIS
# files. Acceptable values are: numbers
# or ranges of numbers separated by commas.
# Separate numbers in a range by a hyphen.
###################################################################
CDR_EVALTHREADS 1,2
CDR_DSLOCKWAIT 5
CDR_QUEUEMEM 4096
CDR_NIFCOMPRESS 0
CDR_SERIAL 0
CDR_DBSPACE
CDR_QHDR_DBSPACE
CDR_QDATA_SBSPACE
CDR_MAX_DYNAMIC_LOGS 0
CDR_SUPPRESS_ATSRISWARN
###################################################################
# High Availability Cluster (HDR, SDS, and RSS)
# Configuration Parameters
###################################################################
# DRAUTO - Controls automatic failover of primary
# servers. Valid for HDR, SDS, and RSS.
# Acceptable values are:
# 0 Manual
# 1 Retain server type
# 2 Reverse server type
# 3 Connection Manager Arbitrator controls
# server type
# DRINTERVAL - The maximum interval, in seconds, between HDR
# buffer flushes. Valid for HDR only.
# DRTIMEOUT - The time, in seconds, before a network
# timeout occurs. Valid for HDR only.
# DRLOSTFOUND - The path of the HDR lost-and-found file.
# Valid for HDR only.
# DRIDXAUTO - Enables (1) or disables (0) automatic index
# repair for an HDR pair. Default is 0.
# HA_ALIAS - The server alias for a high-availability
# cluster. Must be the same as a value of
# DBSERVERNAME or DBSERVERALIASES that uses a
# network-based connection type. Valid for HDR,
# SDS, and RSS.
# LOG_INDEX_BUILDS - Enable (1) or disable (0) index page logging.
# Required for RSS. Optional for HDR and SDS.
# SDS_ENABLE - Enables (1) or disables (0) an SDS server.
# Set this value on an SDS server after setting
# up the primary. Valid for SDS only.
# SDS_TIMEOUT - The time, in seconds, that the primary waits
# for an acknowledgement from an SDS server
# while performing page flushing before marking
# the SDS server as down. Valid for SDS only.
# SDS_TEMPDBS - The temporary dbspace used by an SDS server.
# The format is:
# <dbspace_name>,<path>,<pagesize in KB>,<offset in KB>,
# <size in KB>
# You can include up to 16 entries of SDS_TEMPDBS to
# specify additional dbspaces. Valid for SDS.
# SDS_PAGING - The paths of two buffer paging files,
# separated by a comma. Valid for SDS only.
# UPDATABLE_SECONDARY - Controls whether secondary servers can accept
# update, insert, and delete operations from clients.
# If enabled, specifies the number of connection
# threads between the secondary and primary servers
# for transmitting updates from the secondary.
# Acceptable values are:
# 0 Secondary server is read-only (default)
# 1 through twice the number of CPU VPs, threads
# for performing updates from the secondary.
# Valid for HDR, SDS, and RSS.
# FAILOVER_CALLBACK - Specifies the path and program name called when a
# secondary server transitions to a standard or
# primary server. Valid for HDR, SDS, and RSS.
# TEMPTAB_NOLOG - Controls the default logging mode for temporary
# tables that are explicitly created with the
# CREATE TEMP TABLE or SELECT INTO TEMP statements.
# Secondary servers must not have logged temporary
# tables. Acceptable values are:
# 0 Create temporary tables with logging enabled by
# default.
# 1 Create temporary tables without logging.
# Required to be set to 1 on HDR, RSS, and SDS
# secondary servers.
###################################################################
DRAUTO 0
DRINTERVAL 30
DRTIMEOUT 30
HA_ALIAS
DRLOSTFOUND /restore/informix/log/ipdb/dr.lostfound
DRIDXAUTO 0
LOG_INDEX_BUILDS
SDS_ENABLE
SDS_TIMEOUT 20
SDS_TEMPDBS
SDS_PAGING
UPDATABLE_SECONDARY 0
FAILOVER_CALLBACK
TEMPTAB_NOLOG 0
###################################################################
# Logical Recovery Parameters
###################################################################
# ON_RECVRY_THREADS - The number of logical recovery threads that
# run in parallel during a warm restore.
# OFF_RECVRY_THREADS - The number of logical recovery threads used
# in a cold restore. Also, the number of
# threads used during fast recovery.
###################################################################
ON_RECVRY_THREADS 1
OFF_RECVRY_THREADS 10
###################################################################
# Diagnostic Dump Configuration Parameters
###################################################################
# DUMPDIR - The location of Assertion Failure (AF) diagnostic
# files
# DUMPSHMEM - Controls shared memory dumps. Acceptable values
# are:
# 0 Disabled
# 1 Dump all shared memory
# 2 Exclude the buffer pool from the dump
# DUMPGCORE - Enables (1) or disables (0) whether IDS dumps a
# core using gcore
# DUMPCORE - Enables (1) or disables (0) whether IDS dumps a
# core after an AF
# DUMPCNT - The maximum number of shared memory dumps or
# core files for a single session
###################################################################
DUMPDIR /restore/inf/dump/ipdb
DUMPSHMEM 1
DUMPGCORE 0
DUMPCORE 0
DUMPCNT 1
###################################################################
# Alarm Program Configuration Parameters
###################################################################
# ALARMPROGRAM - Specifies the alarm program to display event
# alarms. To enable automatic logical log backup,
# edit alarmprogram.sh and set BACKUPLOGS=Y.
# ALRM_ALL_EVENTS - Controls whether the alarm program runs for
# every event. Acceptable values are:
# 1 Logs only noteworthy events
# 2 Logs all events
# STORAGE_FULL_ALARM - <time interval in seconds>,<alarm severity>
# specifies in what interval:
# - a message will be printed to the online.log file
# - an alarm will be raised
# when
# - a dbspace becomes full
# (ISAM error -131)
# - a partition runs out of pages or extents
# (ISAM error -136)
# time interval = 0 : OFF
# severity = 0 : no alarm, only message
# SYSALARMPROGRAM - Specifies the system alarm program triggered
# when an AF occurs
###################################################################
ALARMPROGRAM $INFORMIXDIR/etc/alarmprogram.sh
#ALRM_ALL_EVENTS 0
ALRM_ALL_EVENTS 1
STORAGE_FULL_ALARM 600,3
SYSALARMPROGRAM $INFORMIXDIR/etc/evidence.sh
###################################################################
# RAS Configuration Parameters
###################################################################
# RAS_PLOG_SPEED - Technical Support diagnostic parameter.
# Do not change; automatically updated.
# RAS_LLOG_SPEED - Technical Support diagnostic parameter.
# Do not change; automatically updated.
###################################################################
RAS_PLOG_SPEED 0
RAS_LLOG_SPEED 0
###################################################################
# Character Processing Configuration Parameter
###################################################################
# EILSEQ_COMPAT_MODE - Controls whether when processing characters,
# IDS checks if the characters are valid for
# the locale and returns error -202 if they are
# not. Acceptable values are:
# 0 Return an error for characters that are not
# valid (Default)
# 1 Allow characters that are not valid
####################################################################
EILSEQ_COMPAT_MODE 0
###################################################################
# Statistic Configuration Parameters
###################################################################
# QSTATS - Enables (1) or disables (0) the collection of queue
# statistics that can be viewed with onstat -g qst
# WSTATS - Enables (1) or disables (0) the collection of wait
# statistics that can be viewed with onstat -g wst
####################################################################
QSTATS 0
WSTATS 0
###################################################################
# Java Configuration Parameters
###################################################################
# VPCLASS jvp - Configures the Java VP. The format is:
# VPCLASS jvp,num=<#>[,max=<#>][,aff=<#>][,noage]
# JVPJAVAHOME - The JRE root directory
# JVPHOME - The Krakatoa installation directory
# JVPPROPFILE - The Java VP property file
# JVPLOGFILE - The Java VP log file
# JDKVERSION - The version of JDK supported by this server
# JVPJAVALIB - The location of the JRE libraries, relative
# to JVPJAVAHOME
# JVPJAVAVM - The JRE libraries to use for the Java VM
# JVPARGS - Configures the Java VM. To display JNI calls,
# use JVPARGS -verbose:jni. Separate options with
# semicolons.
# JVPCLASSPATH - The Java classpath to use. Use krakatoa_g.jar
# for debugging. Comment out the JVPCLASSPATH
# entry you do not want to use.
###################################################################
#VPCLASS jvp,num=1
JVPJAVAHOME $INFORMIXDIR/extend/krakatoa/jre
JVPHOME $INFORMIXDIR/extend/krakatoa
JVPPROPFILE $INFORMIXDIR/extend/krakatoa/.jvpprops
JVPLOGFILE $INFORMIXDIR/jvp.log
JDKVERSION 1.5
JVPJAVALIB /bin
JVPJAVAVM jvm
#JVPARGS -verbose:jni
#JVPCLASSPATH $INFORMIXDIR/extend/krakatoa/krakatoa_g.jar:$INFORMIXDIR/extend/krakatoa/jdbc_g.jar
JVPCLASSPATH $INFORMIXDIR/extend/krakatoa/krakatoa.jar:$INFORMIXDIR/extend/krakatoa/jdbc.jar
###################################################################
# Buffer pool and LRU Configuration Parameters
###################################################################
# BUFFERPOOL - Specifies the default values for buffers and LRU
# queues in each buffer pool. Each page size used
# by a dbspace has a buffer pool and needs a
# BUFFERPOOL entry. The onconfig.std file contains
# two initial entries: a default entry from which
# to base new page size entries on, and an entry
# for the operating system default page size.
# When you add a dbspace with a different page size,
# IDS adds a BUFFERPOOL entry to the onconfig file
# with values that are the same as the default
# BUFFERPOOL entry, except that the default
# keyword is replaced by size=Nk, where N is the
# new page size. With interval checkpoints, these
# values can now be set higher than in previous
# versions of IDS in an OLTP environment.
# AUTO_LRU_TUNING - Enables (1) or disables (0) automatic tuning of
# LRU queues. When this parameter is enabled, IDS
# increases LRU flushing if it cannot find
# low-priority buffers for page replacement.
###################################################################
BUFFERPOOL default,buffers=10000,lrus=8,lru_min_dirty=50.000000,lru_max_dirty=60.500000
BUFFERPOOL size=4K,buffers=300000,lrus=16,lru_min_dirty=50.000000,lru_max_dirty=60.000000
/etc/services
DB2_db2inst2_1 60005/tcp
DB2_db2inst2_2 60006/tcp
DB2_db2inst2_END 60007/tcp
db2c_db2inst2 50001/tcp #db2 connection port
CMIC 8081/tcp
ipsec_sk_engine_s 4001/udp
vert_serv 50055/tcp
#cllockd 6100/udp
#clm_mig_lkm 6151/tcp
DB2_db2rins1 60008/tcp
DB2_db2rins1_1 60009/tcp
DB2_db2rins1_2 60010/tcp
DB2_db2rins1_END 60011/tcp
ipdbsvc 6800/tcp #New ipdb database instance
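The ipdbsvc service entry above is referenced from the Informix sqlhosts file, which maps a database server name to a host and service. A hypothetical sqlhosts line for this instance is shown below; the server name ipdb is inferred from the ip_0p@ipdb connection string used elsewhere in this manual (database@server), so verify the actual name against $INFORMIXDIR/etc/sqlhosts before relying on it:

```
ipdb    onsoctcp    ifx01    ipdbsvc
```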
7. Restore
First, create (touch) all the chunk files listed in the file renchunk. This file is used to map the chunk files
on ifx01 to the new directory /restore; every chunk file must have mode 660 and informix:informix
ownership.
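The chunk-file preparation described above can be sketched in shell as follows. This is a hedged sketch, assuming renchunk holds one "old_chunk_path new_chunk_path" pair per line (the format expected by ontape -r -rename -f). On the real system run it as root with paths under /restore; this demo uses a temporary directory and skips the chown step:

```shell
# Hedged sketch: pre-create the empty chunk files listed in the second
# column of a renchunk mapping file, then set Informix mode 660.
# A temp dir stands in for /restore; chown needs root on the real system.
workdir=$(mktemp -d)
cat > "$workdir/renchunk" <<EOF
/ix_root/ix_root.1 $workdir/restore/ix_root/ix_root.1
/ix_llog/ix_llog.1 $workdir/restore/ix_llog/ix_llog.1
EOF
while read -r oldchunk newchunk; do
    mkdir -p "$(dirname "$newchunk")"      # create the /restore subdirectory
    touch "$newchunk"                      # pre-create the empty chunk file
    chmod 660 "$newchunk"                  # Informix requires mode 660
    # chown informix:informix "$newchunk"  # on the real system, as root
done < "$workdir/renchunk"
ls -l "$workdir/restore/ix_root/ix_root.1"
```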
$ ontape -r -rename -f renchunk -t ../idsbkup/ipdb_level_0
Please mount tape 1 on ../idsbkup/ipdb_level_0 and press Return to continue ...
Archive Tape Information
Tape type: Archive Backup Tape
Online version: IBM Informix Dynamic Server Version 11.50.UC3W2
Archive date: Mon Jun 17 11:43:59 2013
User id: informix
Terminal id: /dev/pts/1
Archive level: 0
Tape device: /insight_db_dr/ipdb_level_0
Tape blocksize (in k): 1024
Tape size (in k): 72000000
Tape number in series: 1
Spaces to restore:
1 [rootdbs]
2 [llogdbs]
3 [plogdbs]
4 [datadbs1]
5 [datadbs2]
6 [indxdbs1]
7 [indxdbs2]
8 [datadbs3]
9 [indxdbs3]
Archive Information
Informix Dynamic Server Copyright(C) 1986-1998 Informix Software, Inc.
Initialization Time 09/17/2002 09:42:59
System Page Size 4096
Version 16
Index Page Logging OFF
Archive CheckPoint Time 06/17/2013 11:43:58
Dbspaces
number flags fchunk nchunks flags owner name
1 1 1 1 N informix rootdbs
2 1 2 1 N informix llogdbs
3 1 3 1 N informix plogdbs
4 1 4 23 N informix datadbs1
5 1 19 25 N informix datadbs2
6 1 34 7 N informix indxdbs1
7 1 37 5 N informix indxdbs2
8 2001 40 1 N T informix tempdbs1
9 2001 41 1 N T informix tempdbs2
10 2001 42 1 N T informix tempdbs3
11 1 59 20 N informix datadbs3
12 1 75 4 N informix indxdbs3
13 2001 78 1 N T informix tempdbs4
Chunks
chk/dbs offset size free bpages flags pathname
1 1 0 55000 12893 PO-- /ix_root/ix_root.1
2 2 0 250000 69947 PO-- /ix_llog/ix_llog.1
3 3 0 64000 1447 PO-- /ix_plog/ix_plog.1
4 4 0 250000 7 PO-- /ix_dat1/ix_dat1.1
5 4 0 250000 1236 PO-- /ix_dat1/ix_dat1.2
6 4 0 250000 400 PO-- /ix_dat1/ix_dat1.3
7 4 0 250000 2312 PO-- /ix_dat1/ix_dat1.4
8 4 0 250000 0 PO-- /ix_dat1/ix_dat1.5
9 4 0 250000 0 PO-- /ix_dat1/ix_dat1.6
10 4 0 250000 0 PO-- /ix_dat1/ix_dat1.7
11 4 0 250000 0 PO-- /ix_dat1/ix_dat1.8
12 4 0 250000 323 PO-- /ix_dat1/ix_dat1.9
13 4 0 250000 1800 PO-- /ix_dat1/ix_dat1.10
14 4 0 250000 72 PO-- /ix_dat1/ix_dat1.11
15 4 0 250000 688 PO-- /ix_dat1/ix_dat1.12
16 4 0 250000 1208 PO-- /ix_dat1/ix_dat1.13
17 4 0 250000 0 PO-- /ix_dat1/ix_dat1.14
18 4 0 250000 0 PO-- /ix_dat1/ix_dat1.15
19 5 0 250000 1 PO-- /ix_dat2/ix_dat2.1
20 5 0 250000 941 PO-- /ix_dat2/ix_dat2.2
21 5 0 250000 5 PO-- /ix_dat2/ix_dat2.3
22 5 0 250000 5 PO-- /ix_dat2/ix_dat2.4
23 5 0 250000 5 PO-- /ix_dat2/ix_dat2.5
24 5 0 250000 133 PO-- /ix_dat2/ix_dat2.6
25 5 0 250000 133 PO-- /ix_dat2/ix_dat2.7
26 5 0 250000 133 PO-- /ix_dat2/ix_dat2.8
27 5 0 250000 645 PO-- /ix_dat2/ix_dat2.9
28 5 0 250000 5 PO-- /ix_dat2/ix_dat2.10
29 5 0 250000 40197 PO-- /ix_dat2/ix_dat2.11
30 5 0 250000 67589 PO-- /ix_dat2/ix_dat2.12
31 5 0 250000 9221 PO-- /ix_dat2/ix_dat2.13
32 5 0 250000 92165 PO-- /ix_dat2/ix_dat2.14
33 5 0 250000 245765 PO-- /ix_dat2/ix_dat2.15
34 6 0 250000 1 PO-- /ix_idx1/ix_idx1.1
35 6 0 250000 6 PO-- /ix_idx1/ix_idx1.2
36 6 0 250000 3 PO-- /ix_idx1/ix_idx1.3
37 7 0 250000 3 PO-- /ix_idx2/ix_idx2.1
38 7 0 250000 1 PO-- /ix_idx2/ix_idx2.2
39 7 0 250000 5 PO-- /ix_idx2/ix_idx2.3
40 8 0 250000 242365 PO-- /ix_temp/ix_temp.1
41 9 0 250000 242380 PO-- /ix_temp/ix_temp.2
42 10 0 250000 242365 PO-- /ix_temp/ix_temp.3
43 7 0 250000 76621 PO-- /ix_idx2/ix_idx2.4
44 5 0 250000 243717 PO-- /ix_dat2/ix_dat2.16
45 6 0 250000 13 PO-- /ix_idx1/ix_idx1.4
46 5 0 250000 249949 PO-- /ix_dat2/ix_dat2.17
47 5 0 250000 229517 PO-- /ix_dat2/ix_dat2.18
48 6 0 250000 228413 PO-- /ix_idx1/ix_idx1.5
49 7 0 250000 249997 PO-- /ix_idx2/ix_idx2.5
50 5 0 250000 233613 PO-- /ix_dat2/ix_dat2.19
51 6 0 250000 249997 PO-- /ix_idx1/ix_idx1.6
52 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.20
53 5 0 250000 247437 PO-- /ix_dat2/ix_dat2.21
54 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.22
55 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.23
56 6 0 250000 249997 PO-- /ix_idx1/ix_idx1.7
57 5 0 250000 249997 PO-- /ix_dat2/ix_dat2.24
58 5 0 250000 217229 PO-- /ix_dat2/ix_dat2.25
59 11 0 250000 3 PO-- /ix_dat3/ix_dat3.1
60 11 0 250000 1 PO-- /ix_dat3/ix_dat3.2
61 11 0 250000 1 PO-- /ix_dat3/ix_dat3.3
62 11 0 250000 1 PO-- /ix_dat3/ix_dat3.4
63 11 0 250000 1 PO-- /ix_dat3/ix_dat3.5
64 11 0 250000 1 PO-- /ix_dat3/ix_dat3.6
65 11 0 250000 1 PO-- /ix_dat3/ix_dat3.7
66 11 0 250000 1 PO-- /ix_dat3/ix_dat3.8
67 11 0 250000 1 PO-- /ix_dat3/ix_dat3.9
68 11 0 250000 1 PO-- /ix_dat3/ix_dat3.10
69 11 0 250000 1 PO-- /ix_dat3/ix_dat3.11
70 11 0 250000 5 PO-- /ix_dat3/ix_dat3.12
71 11 0 250000 5 PO-- /ix_dat3/ix_dat3.13
72 11 0 250000 5 PO-- /ix_dat3/ix_dat3.14
73 11 0 250000 5 PO-- /ix_dat3/ix_dat3.15
74 11 0 250000 5 PO-- /ix_dat3/ix_dat3.16
75 12 0 250000 6 PO-- /ix_idx3/ix_idx3.1
76 12 0 250000 5 PO-- /ix_idx3/ix_idx3.2
77 12 0 250000 156865 PO-- /ix_idx3/ix_idx3.3
78 13 0 250000 242397 PO-- /ix_temp/ix_temp.4
79 11 0 250000 5 PO-- /ix_dat3/ix_dat3.17
80 11 0 250000 35813 PO-- /ix_dat3/ix_dat3.18
81 4 0 250000 0 PO-- /ix_dat1/ix_dat1.16
82 4 0 250000 3784 PO-- /ix_dat1/ix_dat1.17
83 4 0 250000 0 PO-- /ix_dat1/ix_dat1.18
84 11 0 250000 249997 PO-- /ix_dat3/ix_dat3.19
85 12 0 250000 249997 PO-- /ix_idx3/ix_idx3.4
86 4 0 250000 0 PO-- /ix_dat1/ix_dat1.19
87 4 0 250000 0 PO-- /ix_dat1/ix_dat1.20
88 4 0 250000 0 PO-- /ix_dat1/ix_dat1.21
89 4 0 250000 232077 PO-- /ix_dat1/ix_dat1.22
90 11 0 250000 249997 PO-- /ix_dat3/ix_dat3.20
91 4 0 250000 249997 PO-- /ix_dat1/ix_dat1.23
Continue restore? (y/n)y
Do you want to back up the logs? (y/n)n
Please mount tape 1 on /restore/informix/log/ipdb.log and press Return to continue ...
Log files are corrupt.
Could not salvage logs, continue restore? (y/n)y
WARNING: If you intend to use J/Foundation or GLS for Unicode feature (GLU) with
this Server instance, please make sure that your SHMBASE value specified in onconfig
is 0x40000000L or above. Otherwise you will have problems while attaching or
dynamically adding virtual shared memory segments. Please refer to the Server
machine notes for more information.
Restore a level 1 archive (y/n) y
Ready for level 1 tape
Please mount tape 1 on ../idsbkup/ipdb_level_0 and press Return to continue ...
Archive Tape Information
Tape type: Archive Backup Tape
Online version: IBM Informix Dynamic Server Version 11.50.UC3W2
Archive date: Mon Jun 17 11:43:59 2013
User id: informix
Terminal id: /dev/pts/1
Archive level: 0
Tape device: /insight_db_dr/ipdb_level_0
Tape blocksize (in k): 1024
Tape size (in k): 72000000
Tape number in series: 1
invalid archive level
Restore a level 1 archive (y/n) n
Do you want to restore log tapes? (y/n)y
Roll forward should start with log number 704801
Please mount tape 1 on /restore/informix/log/ipdb.log and press Return to continue ...
Unexpected end of log tape (errno 0), continuing...
Do you want to restore another log tape? (y/n)n
Program over.
$ hostname
cm07
$ onstat -
IBM Informix Dynamic Server Version 11.50.UC3W2 -- Quiescent (CKPT INP) -- Up
08:02:28 -- 2051200 Kbytes
$ onmode -m
Ensure that the version of IBM® Informix® is identical on both the primary and secondary systems.
Use continuous log restore to restart a log restore with newly available logs after all currently
available logs have been restored.
To configure continuous log restore with ontape:
1. On the primary system, perform a level-0 archive with the ontape -s -L 0 command.
2. On the secondary system, copy the files or mount the tape (as assigned by LTAPEDEV) and perform a
physical restore with the ontape -p command.
3. Respond to the following prompts:
Continue restore? Y
Do you want to back up the logs? N
Restore a level 1 archive? N
After the physical restore completes, the database instance waits in fast recovery mode to restore logical logs.
4. On the primary system, back up logical logs with the ontape -a command.
5. On the secondary system, copy the files or mount the tape that contains the backed-up logical logs from the
primary system. Perform a logical log restore with the ontape -l -C command.
6. Repeat steps 4 and 5 for all logical logs that are available to back up and restore.
7. If you are doing continuous log restore on a secondary system as an emergency standby, run the following
commands to complete restoring logical logs and quiesce the server:
o If logical logs are available to restore, use the ontape -l command.
o After all available logical logs are restored, use the ontape -l -X command.
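The secondary-side apply cycle can be sketched as a small shell loop. This is an illustrative stub, not the real procedure: the log directory name and file pattern are assumptions, and the ontape call is replaced by echo so the control flow can be shown without a live instance. On the real system, run ontape -l -C as user informix each time a new batch of logical-log backups arrives:

```shell
# Stubbed sketch of the continuous-log-restore apply cycle on the
# secondary. LOGDIR stands in for the NFS-shared log backup location.
LOGDIR=$(mktemp -d)                   # e.g. /restore/informix/log on ifx01
touch "$LOGDIR/ipdb_log.0001" "$LOGDIR/ipdb_log.0002"
applied=0
for f in "$LOGDIR"/ipdb_log.*; do
    [ -e "$f" ] || continue
    echo "applying $f: ontape -l -C"  # real system: ontape -l -C
    applied=$((applied + 1))
done
echo "applied $applied log backups"
# After the last batch on an emergency standby:
#   ontape -l        # restore remaining available logs
#   ontape -l -X     # complete the restore and quiesce the server
```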
TIPS: The database backup image is shared over NFS. To speed up network data transfer, we put
cm07 and ifx01 on the same subnet:
# ifconfig en1 192.168.108.122 alias
To verify the network connection and IP packet routing:
# traceroute ifx01
If this setting does not work, remove the alias with: # ifconfig en1 192.168.108.122 delete
Call IBM 1-800-426-7378 to open a hardware ticket
1. Give IBM the data center location:
Livingston International Inc.
135B Matheson Blvd. West
Mississauga, On
Postcode: L5R 3E9
Telephone: 905-890-3210 x29
2. Get System Information:
# prtconf
Machine Model: 9117-570
Machine S/N: 10-A32FD
# snap -r
# snap -a
# snap -c
or simply: # snap -gktc
3. Send file /tmp/ibmsupt/snap.pax.Z to IBM
Figure 1: Software components and Data Process
[Diagram: data files and token files arriving on queues 10, 21, 22, 31, 32, 34, 41, 46, 51, 52, 70, 71,
and 81 are picked up by the runner, which starts the Tuxedo Client Reader program (tcl); the Reader
calls services on the Tuxedo Server, which updates the Informix Server ip_0p@ipdb. The stages are
Data Preparation, Data Loading, Data Process, Update Database, and Data Store.]
Figure 2: Software architecture and procedure
Stage / Software/Process:
Data Preparation / FTP (VMS to ifx01): RCPs data files and token files from VMS to ifx01, into
/dmqjtmp/rcp and /dmqjtmp/dmqvax/token.
Data Loading / Runner: Starts the Tuxedo Client Reader program (tcl) to process data files queue by
queue and one by one, based on token file status and sequence. Moves data and token files to
/dmqjtmp/rcp/done and /dmqjtmp/dmqvax/tokendone when processing starts. Keeps three days of
data files from VMS on ifx01.
Data Process / Tuxedo Client Reader (tcl): Reads DECmessageQ records from the data files and calls
Tuxedo services on the Tuxedo Server to process these records.
Update Database / Tuxedo Server (locsrv): Starts and oversees a set of services; updates the Informix
database through Tuxedo services.
Data Store / Informix Server (ip_0p@ipdb): Stores records in database tables and provides data for
the Insight Web Application.
RUNNER.KSH
[Flowchart: runner.ksh loops over the reader queues 10, 21, 22, 31, 32, 34, 41, 51, 52, 70, 71, and 81
in sequence. For each queue it opens /dmqjtmp/rcp/*.vax and checks for a token file for the current
queue in /dmqjtmp/dmqvax/token/*.vax; if there is none, it moves on to the next queue. Otherwise it
sets the last token file received for the queue, prepares the data files, and checks whether a Reader
is available for the queue; if not, it moves on to the next queue. If a Reader is available, it moves the
current data and token files to the work directories /dmqjtmp/rcp/done/*.vax and
/dmqjtmp/dmqvax/token/tokendone/*.vax, starts the Reader on the data file, and when the Reader
finishes, logs the end of work and sets the Cur data file and Last Ptr data file. The loop ends when all
queues have been handled.]
This is a Korn shell script named 'runner.ksh' that runs a Tuxedo Reader program named 'tcl' to load
data files into the Informix database. The data files come from the LOCUS system, grouped into 11
different queues that map to different tables in the Informix database. Data files are stored in
/dmqjtmp/rcp/*.vax and are moved to /dmqjtmp/rcp/done/*.vax for the Tuxedo Reader to process,
provided the Reader is available (only one Reader can run per queue at a time).
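The queue loop described above can be sketched in shell as follows. The queue list and directory names come from the text; the lock-file check and the tcl invocation (stubbed with echo) are illustrative assumptions, and a temporary directory stands in for /dmqjtmp/rcp:

```shell
# Illustrative sketch of the runner.ksh queue loop. Echoes the Reader
# invocation instead of starting the real Tuxedo Reader (tcl).
RCP=$(mktemp -d)                    # stand-in for /dmqjtmp/rcp
mkdir -p "$RCP/done"
touch "$RCP/q10_0001.vax" "$RCP/q21_0001.vax"    # sample data files
for q in 10 21 22 31 32 34 41 51 52 70 71 81; do
    for f in "$RCP"/q${q}_*.vax; do
        [ -e "$f" ] || continue                   # no data file for this queue
        [ -e "$RCP/reader.$q.lock" ] && continue  # one Reader per queue
        mv "$f" "$RCP/done/"                      # move file to work directory
        echo "start Reader: tcl queue=$q file=$(basename "$f")"
    done
done
ls "$RCP/done"
```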
Practice Two: Set up a test Informix database on Red Hat Linux 5.8 with VMware
Add disk space
Type the following to send a rescan request:
# echo "- - -" > /sys/class/scsi_host/host0/scan
# fdisk -l
You will see the newly added disk.
Partition this new disk
# fdisk /dev/sdb
Format the newly created partition
# mkfs.ext3 /dev/sdb1
On Red Hat Linux, edit /etc/fstab to add the newly formatted partition to the filesystems, then mount it
(/etc/mtab is maintained automatically by the mount command and should not be edited by hand).
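For example, a matching /etc/fstab line might look like the following; the /data mount point is an assumption, so substitute whatever mount point your layout uses, then create it and mount it (mkdir /data; mount /data):

```
/dev/sdb1    /data    ext3    defaults    1 2
```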
Share directory on Windows
Installing a Samba server on the Linux system is a good way to share a directory between Windows and Linux
systems, so you can conveniently copy downloaded software packages from Windows to Linux.
To set up a shared folder on Windows for Linux to access, start by making sure your network settings are configured to
allow the connection from the other computer by opening the Network and Sharing Center.
In the Network and Sharing Center window, click on “Change advanced sharing settings.”
For your current profile, adjust the following two settings:
- Turn on network discovery
- Turn on file and printer sharing
Click on “Save Changes” after those settings are configured. Now we can create a place on the Windows computer for the Linux machine to
see files and copy contents to. There are no limitations to what you can share out (you could theoretically share your entire hard drive), but
we will just be sharing out a folder called “Share” located on our Desktop.
Right click on the folder you’d like to share out over the network, and click Properties. Go to the Sharing tab and click Advanced Sharing.
Check the “Share this folder” box and click on “Permissions” toward the bottom.
In the Permissions window, you can restrict access to the folder for certain accounts. To let any user have access to your folder, just give Full
Control to the Everyone user. This will allow anyone to read and write changes to the shared folder. If you would rather restrict access to
certain accounts, just remove the Everyone user and add the users you’d like to grant access to. Note: These user accounts are on the
Windows computer, not Linux.
Click OK on the Permissions and Advanced Sharing windows once you’ve made your changes. While still in the Properties menu, click on the
Security tab.
For the Linux user to have access to the shared folder, the same permissions need to be configured in this tab as what we configured in the
sharing settings. If the two settings don’t match, the most restrictive settings are the ones that will take effect. If your desired user already has
their security permissions set up (such as the geek user in our example) then you’re good to go and can click Close.
If you need to add a user, such as Everyone, click on Edit.
Click on Add in the next menu, enter the username, and click OK.
Click OK on all the open windows, and your folder should now be shared out and accessible on your Linux computer.
Accessing the Windows Share from Linux
You should be able to mount the shared folder by using the GUI in Linux, but it’s also very easy to do with the command line, and it’s easier to
show a terminal example because it will work across many different distributions.
You’ll need the cifs-utils package in order to mount SMB shares:
# sudo apt-get install cifs-utils
(On Red Hat systems, install it with yum instead: # yum install cifs-utils)
After that, just make a directory and mount the share to it. In this example, we will mount the folder to our Desktop for easy access.
mkdir ~/Desktop/Windows-Share
# sudo mount.cifs //WindowsPC/Share /home/geek/Desktop/Windows-Share -o user=geek
You will be prompted for the root password of the Linux machine, and then for the password of the ‘geek’
account on Windows. After running the command, you can browse the contents of the Windows share and add data to it.
In case you need help understanding the mount command, here’s a breakdown:
sudo mount.cifs - the mount command, set to mount a CIFS (SMB) share.
//WindowsPC/Share - the full path to the shared folder. WindowsPC is the name of the Windows computer:
type “This PC” into the Start menu on Windows, right-click it, and go to Properties to see the computer name.
/home/geek/Desktop/Windows-Share - where we’d like the share to be mounted.
-o user=geek - the Windows username that we are using to access the shared folder.
Creating the Share on Linux
To set up a shared folder on Linux for Windows to access, start by installing Samba.
# sudo apt-get install samba
(On Red Hat systems: # yum install samba)
After Samba installs, configure a username and password that will be used to access the share.
# smbpasswd -a geek
Note: In this example, we are using ‘geek’ since we already have a Linux user with that name but you can choose any name you’d like.
Create the directory that you’d like to share out to your Windows computer. We’re just going to put a folder on our Desktop.
mkdir ~/Desktop/Share
Now, use your favorite editor to configure the smb.conf file.
# sudo vi /etc/samba/smb.conf
Scroll down to the end of the file and add these lines:
[<folder_name>]
path = /home/<user_name>/<folder_name>
available = yes
valid users = <user_name>
read only = no
browsable = yes
public = yes
writable = yes
Obviously, you’ll need to replace the placeholder values with your personal settings.
Save the file and close your editor. Now, restart the SMB service for the changes to take effect.
# sudo service smbd restart
(On Red Hat systems the service is named smb: # service smb restart)
Your shared folder should now be accessible from a Windows PC.
Accessing the Linux Share from Windows
Now, let’s add the Linux share to our Windows Desktop. Right-click somewhere on your Desktop and go to New > Shortcut.
Type in the network location of the shared folder, with this syntax:
\\IP-ADDRESS\SHARE-NAME
If you need the IP of your Linux computer, just issue the following command:
# ifconfig
Click Next, choose a name for the Shortcut, and click Finish. You should end up with a Shortcut on your Desktop that goes right to the Linux
share.
# chown -R lchen:root /Server
Another way to share a Windows drive (directory) with the VMware Linux server:
Install Informix 11.7 on Red Hat Linux 5.8 64-bit
Create user/group: informix/informix
[root@db2cm64 home]# mkdir informix
[root@db2cm64 home]# chmod -R 755 informix
[root@db2cm64 home]# chown -R informix:informix informix
[root@db2cm64 informix]# cd /Server/informix; ./ids_install
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
Launching installer...
Preparing CONSOLE Mode Installation...
===============================================================================
IBM Informix Software Bundle (created with InstallAnywhere)
-------------------------------------------------------------------------------
===============================================================================
Getting started with IBM Informix Software Bundle
-------------------------------------------------
InstallAnywhere will guide you through the installation of IBM Informix
Software Bundle.
Copyright IBM Corporation 1996, 2012. All rights reserved.
1. Release Notes
The Release Notes can be found in
/Server/informix/SERVER/doc/ids_unix_relnotes_11.70.html
2. Installation Guide
Please view the Installation / Quick Beginnings Guide at
/Server/informix/SERVER/doc/ids_unix_installg_11.70.pdf
3. Launch Information Center
Access the IDS 11.70 Information Center at
http://publib.boulder.ibm.com/infocenter/idshelp/v117/index.jsp
To Begin Installation,
Respond to each prompt to proceed to the next step in the installation.
If you want to change something on a previous step, type 'back'.
You may cancel this installation at any time by typing 'quit'.
PRESS <ENTER> TO CONTINUE:
===============================================================================
International License Agreement for Non-Warranted Programs
Part 1 - General Terms
BY DOWNLOADING, INSTALLING, COPYING, ACCESSING, CLICKING ON AN
"ACCEPT" BUTTON, OR OTHERWISE USING THE PROGRAM, LICENSEE AGREES TO
THE TERMS OF THIS AGREEMENT. IF YOU ARE ACCEPTING THESE TERMS ON
BEHALF OF LICENSEE, YOU REPRESENT AND WARRANT THAT YOU HAVE FULL
AUTHORITY TO BIND LICENSEE TO THESE TERMS. IF YOU DO NOT AGREE TO
THESE TERMS,
* DO NOT DOWNLOAD, INSTALL, COPY, ACCESS, CLICK ON AN "ACCEPT" BUTTON,
OR USE THE PROGRAM; AND
* PROMPTLY RETURN THE UNUSED MEDIA AND DOCUMENTATION TO THE PARTY FROM
WHOM IT WAS OBTAINED FOR A REFUND OF THE AMOUNT PAID. IF THE PROGRAM
WAS DOWNLOADED, DESTROY ALL COPIES OF THE PROGRAM.
1. Definitions
Press Enter to continue viewing the license agreement, or enter "1" to
accept the agreement, "2" to decline it, "3" to print it, or "99" to go back
to the previous screen.:
"Authorized Use" - the specified level at which Licensee is authorized
to execute or run the Program. That level may be measured by number of
users, millions of service units ("MSUs"), Processor Value Units
("PVUs"), or other level of use specified by IBM.
"IBM" - International Business Machines Corporation or one of its
subsidiaries.
"License Information" ("LI") - a document that provides information
and any additional terms specific to a Program. The Program's LI is
available at www.ibm.com/software/sla. The LI can also be found in the
Program's directory, by the use of a system command, or as a booklet
included with the Program.
"Program" - the following, including the original and all whole or
partial copies: 1) machine-readable instructions and data, 2)
components, files, and modules, 3) audio-visual content (such as
images, text, recordings, or pictures), and 4) related licensed
Press Enter to continue viewing the license agreement, or enter "1" to
accept the agreement, "2" to decline it, "3" to print it, or "99" to go back
to the previous screen.: 1
===============================================================================
Installation Goals
------------------
What do you want to accomplish?
->1- Install products and features
2- Extract the product files (-DLEGACY option)
3- Create an RPM image for redistribution
ENTER THE NUMBER FOR YOUR CHOICE, OR PRESS <ENTER> TO ACCEPT THE DEFAULT::
===============================================================================
Installation Location
---------------------
Choose location for software installation
Default Install Folder: /opt/IBM/informix
ENTER AN ABSOLUTE PATH, OR PRESS <ENTER> TO ACCEPT THE DEFAULT
:
===============================================================================
Installation Type
-----------------
Select the installation type.
Typical: Install the database server with all features and a database server that
is configured with default values. Includes:
** Client Software Development Kit (CSDK)
** Java Database Connectivity (JDBC)
Minimum disk space required: 700-800MB
Custom: Install the database server with specific features and software that you need.
Optionally install a configured database server instance.
Minimum disk space required: 75 MB (without a server instance)
->1- Typical
2- Custom
ENTER THE NUMBER FOR YOUR CHOICE, OR PRESS <ENTER> TO ACCEPT THE DEFAULT::
===============================================================================
Server Instance Creation
------------------------
Create a server instance?
->1- Yes - create an instance
2- No - do not create an instance
ENTER THE NUMBER FOR YOUR CHOICE, OR PRESS <ENTER> TO ACCEPT THE DEFAULT::
===============================================================================
International License Agreement for Non-Warranted Programs
Part 1 - General Terms
BY DOWNLOADING, INSTALLING, COPYING, ACCESSING, CLICKING ON AN
"ACCEPT" BUTTON, OR OTHERWISE USING THE PROGRAM, LICENSEE AGREES TO
THE TERMS OF THIS AGREEMENT. IF YOU ARE ACCEPTING THESE TERMS ON
BEHALF OF LICENSEE, YOU REPRESENT AND WARRANT THAT YOU HAVE FULL
AUTHORITY TO BIND LICENSEE TO THESE TERMS. IF YOU DO NOT AGREE TO
THESE TERMS,
* DO NOT DOWNLOAD, INSTALL, COPY, ACCESS, CLICK ON AN "ACCEPT" BUTTON,
OR USE THE PROGRAM; AND
* PROMPTLY RETURN THE UNUSED MEDIA AND DOCUMENTATION TO THE PARTY FROM
WHOM IT WAS OBTAINED FOR A REFUND OF THE AMOUNT PAID. IF THE PROGRAM
WAS DOWNLOADED, DESTROY ALL COPIES OF THE PROGRAM.
1. Definitions
Press Enter to continue viewing the license agreement, or enter "1" to
accept the agreement, "2" to decline it, "3" to print it, or "99" to go back
to the previous screen.: 1
===============================================================================
Installation Summary
--------------------
Please review the following before continuing:
Product Name:
IBM Informix Software Bundle
Install Folder:
/opt/IBM/informix
Product Features:
IBM Informix database server,
Base Server,
Extensions and tools,
J/Foundation,
Database extensions,
Conversion and reversion support,
XML publishing,
Demonstration database scripts,
Enterprise Replication,
Data loading utilities,
onunload and onload utilities,
dbload utility,
High-Performance Loader,
Backup and Restore,
archecker utility,
ON-Bar utility,
Informix Storage Manager,
Informix interface to Tivoli Storage Manager,
Administrative utilities,
Performance monitoring utilities,
Miscellaneous monitoring utilities,
Auditing utilities,
Database import and export utilities,
IBM Informix Client SDK,
IBM Informix Object Interface for C++,
IBM Informix Object Interface for C++ demos,
IBM Informix ESQL/C,
7.2 application compatibility module,
IBM Informix ESQL/C demos,
IBM Informix LIBDMI for client applications,
IBM Informix ODBC Driver,
IBM Informix ODBC Driver demos,
Global Language Support (GLS),
West European and Americas,
East European and Slavic,
Japanese,
Korean,
Chinese,
Thai,
IBM Informix JDBC
Server name:
ol_informix1170
Server DRDA alias:
Server number:
0
TCP/IP port number:
16697
Total instance size:
437 MB
Total memory (bufferpool + user):
129 MB
Bufferpool allocation:
97 MB
Number of processors:
1
Data storage location:
/opt/IBM/informix/ol_informix1170/dbspaces
Disk Space Information (for Installation Target):
Required: 1,048,477,120 bytes
Available: 30,246,674,432 bytes
PRESS <ENTER> TO CONTINUE:
===============================================================================
Ready To Install
----------------
InstallAnywhere is now ready to install IBM Informix Software Bundle onto your
system at the following location:
/opt/IBM/informix
PRESS <ENTER> TO INSTALL:
===============================================================================
Installing...
-------------
[==================|==================|==================|==================]
[------------------|------------------|------------------|------------------]
===============================================================================
Server Initialization
---------------------
The server will now be initialized. Initialization might take quite a while,
depending on the performance of your computer.
PRESS <ENTER> TO CONTINUE:
===============================================================================
Using the new instance
----------------------
The IBM Informix Software Bundle created a database server instance. If you
selected to initialize the instance and to display a command prompt, the
instance is ready to use.
If you selected to initialize the instance and chose not to display a command
prompt, you can go to /opt/IBM/informix on a command line and run one of the
following commands, where ol_informix1170 is the name of the path or file where
the instance is installed:
Windows:
ol_informix1170.cmd
UNIX csh:
source ol_informix1170.csh
UNIX ksh or bourne:
./ol_informix1170.ksh
If you selected to initialize the instance and it fails to run, check the
online.log file to verify that initialization was successful.
In addition, if you used an existing configuration file during the
installation, ensure that the root chunk exists, is owned by user and group
informix, and has readable and writable (rw) permission bits set for owner and
group only.
PRESS <ENTER> TO CONTINUE:
===============================================================================
Installation Complete
---------------------
Congratulations! IBM Informix Software Bundle installation is complete.
Product install status:
IBM Informix 11.70: Successful
IBM Informix Client-SDK: Successful
IBM Informix JDBC Driver: Successful
IBM OpenAdmin Tool for Informix: Successful
For more information about using Informix products, see the IBM Informix 11.70
Information Center at
http://publib.boulder.ibm.com/infocenter/idshelp/v117/index.jsp.
PRESS <ENTER> TO EXIT THE INSTALLER:
Configure the Linux System for Informix
1. [informix@ibmserver ~]$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
# export PATH
. ~/ol_informix1170.ksh
2. [informix@ibmserver ~]$ cat .netrc
machine ipdev login lchen password admin12
machine ifx01 login lchen password admin12
3. [informix@ibmserver ~]$ tail /etc/services
DB2_db2inst1_2 60002/tcp
DB2_db2inst1_END 60003/tcp
db2c_db2inst1 50000/tcp
CMIC 8084/tcp
ol_informix1170 25337/tcp
dr_informix1170 32300/tcp
ipdbsvc 6800/tcp #New ipdb database instance
ardbsvc 6900/tcp #New ardb database instance
systestdbsvc 6600/tcp # system test database
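Before starting an instance, it is worth confirming that each Informix service name in /etc/services resolves to the expected TCP port. A minimal sketch, self-contained for illustration (it builds a sample services file; on a real server, point SERVICES_FILE at /etc/services; names and ports mirror the listing above):

```shell
#!/bin/sh
# Build a small sample services file so the sketch is self-contained.
SERVICES_FILE=$(mktemp)
printf '%s\t%s\n' ipdbsvc 6800/tcp ardbsvc 6900/tcp > "$SERVICES_FILE"

check_service() {
    # Succeeds when "name<whitespace>port/tcp" appears in the services file.
    grep -Eq "^$1[[:space:]]+$2/tcp" "$SERVICES_FILE"
}

check_service ipdbsvc 6800 && echo "ipdbsvc ok"
check_service ardbsvc 6900 && echo "ardbsvc ok"
```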
Load tables between two instances/databases using the unload/load utility
4. Create chunk files for the Informix dbspace
# mount /ix_dat
# touch /ix_dat/ix_dat.1
# touch /ix_dat/ix_dat.2
# touch /ix_dat/ix_dat.3
# touch /ix_dat/ix_dat.4
# chown -R informix:informix /ix_dat
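The chunk-file preparation above can be sketched as a small script. A minimal sketch, assuming cooked-file chunks: the files must end up owned by informix:informix with mode 660 before onspaces will accept them. CHUNK_DIR is a temporary stand-in for the real mount point /ix_dat:

```shell
#!/bin/sh
# Stand-in for the real mount point /ix_dat.
CHUNK_DIR=$(mktemp -d)

for n in 1 2 3 4; do
    f="$CHUNK_DIR/ix_dat.$n"
    touch "$f"          # create an empty cooked-file chunk
    chmod 660 "$f"      # onspaces requires rw for owner and group only
done
# On the real server, additionally run as root:
#   chown -R informix:informix /ix_dat

ls -l "$CHUNK_DIR"
```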
5. Create a 1G dbspace <datadbs1>
# su informix
$ onspaces -c -d datadbs1 -p /ix_dat/ix_dat.1 -o 0 -s 1000000
Verifying physical disk space, please wait ...
Space successfully added.
** WARNING ** A level 0 archive of Root DBSpace will need to be done.
6. Add the other three 1 GB chunk files to the datadbs1 dbspace
$ onspaces -a datadbs1 -p /ix_dat/ix_dat.2 -o 0 -s 1000000
Verifying physical disk space, please wait ...
Chunk successfully added.
$ onspaces -a datadbs1 -p /ix_dat/ix_dat.3 -o 0 -s 1000000
Verifying physical disk space, please wait ...
Chunk successfully added.
$ onspaces -a datadbs1 -p /ix_dat/ix_dat.4 -o 0 -s 1000000
Verifying physical disk space, please wait ...
Chunk successfully added.
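The three repetitive onspaces invocations above can be generated by a loop instead of typed one by one. A sketch that collects the commands into a variable and prints them for review (run them only after checking the output; paths and sizes match the steps above):

```shell
#!/bin/sh
# Generate, but do not execute, the chunk-add commands for review.
cmds=""
for n in 2 3 4; do
    cmds="${cmds}onspaces -a datadbs1 -p /ix_dat/ix_dat.$n -o 0 -s 1000000
"
done
printf '%s' "$cmds"
```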
7. Drop database <sysclrdb> using dbaccess
DROP DATABASE >>
Enter the name of the database you wish to drop.
----------------------- @ol_informix1170 ------- Press CTRL-W for Help --------
sysadmin@ol_informix1170
sysclrdb@ol_informix1170
sysmaster@ol_informix1170
sysuser@ol_informix1170
sysutils@ol_informix1170
Tip: Delete this database, because we will set up a new test database that exactly matches the production system.
8. Create a new database using dbaccess:
database name: ip_0p
Log: No
CREATE DATABASE >> ip_0p
Enter the name you want to assign to the new database, then press Return.
------------------------------------------------ Press CTRL-W for Help --------
Select the dbspace <datadbs1> as the database <ip_0p>'s dbspace.
Tip: Chunks (files) make up a dbspace; databases reside on dbspaces; and tables, indexes, routines, etc., reside in the database.
9. Create schema on the source production informix server ifx01:
$ dbschema -d ip_0p ip_0p.sql
10. FTP ip_0p.sql to the Linux server db2cm64, and run this SQL to set up the database ip_0p for testing
$ dbaccess ip_0p.sql
11. On the production server ifx01, use dbaccess to unload tables lii_client, lii_account, and client_invoice to files, then FTP
these files to the Linux server db2cm64
ALTER TABLE ip_0p:informix.hs_duty_rate DROP CONSTRAINT u208_791;
UNLOAD TO "/home/lchen/ifx01.lii_client" SELECT * FROM lii_client;
UNLOAD TO "/home/lchen/ifx01.lii_account" SELECT * FROM lii_account;
UNLOAD TO "/home/lchen/ifx01.client_invoice" SELECT * FROM client_invoice;
ALTER TABLE ip_0p:informix.hs_duty_rate ADD CONSTRAINT hs_duty_rate_PK PRIMARY KEY
(hsno,hstarifftrtmnt,effdate);
12. On the Linux server db2cm64, use dbaccess to load these files into the tables:
LOAD FROM "/home/lchen/ifx01.lii_client" INSERT INTO lii_client;
LOAD FROM "/home/lchen/ifx01.lii_account" INSERT INTO lii_account;
LOAD FROM "/home/lchen/ifx01.client_invoice" INSERT INTO client_invoice;
13. Change the database to unbuffered (U) log mode, which is the normal database log setting
$ ontape -s -U ip_0p
Tip: When loading a large file (table), it is a good idea to change the database to No-Log mode first:
$ ontape -s -N ip_0p
14. Alter a table to turn its logging mode off/on
$ dbaccess
ALTER TABLE client_invoice TYPE (RAW)
ALTER TABLE client_invoice TYPE (STANDARD)
Adjust the size of log files to prevent long transactions
Use larger log files when many users are writing to the logs at the same time. If you use small logs and long transactions
are likely to occur, reduce the high-watermark. Set the LTXHWM value to 50 and the LTXEHWM value to 60.
If the log files are too small, the database server might run out of log space while rolling back a long transaction. In this
case, the database server cannot block fast enough to add a new log file before the last one fills. If the last log file fills,
the system hangs and displays an error message. To fix the problem, shut down and restart the database server.
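The watermark settings described above live in the onconfig file. A sketch of the relevant fragment; the 50/60 values are this section's recommendation for small logs with long transactions, not the shipped defaults:

```
LTXHWM   50    # long-transaction high-watermark (% of logical-log space
               # at which a long transaction is rolled back)
LTXEHWM  60    # exclusive high-watermark: at this %, the rolling-back
               # transaction gets exclusive use of the remaining log space
```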
Add more tempdbs space to build (enable) constraints and indexes for a large table.
$ onspaces -a tempdbs -p /ix_dat/ix_temp.1 -o 0 -s 1000000
Verifying physical disk space, please wait ...
Chunk successfully added.
< Run SQL in $ dbaccess >
SET CONSTRAINTS,INDEXES,TRIGGERS FOR client_invoice ENABLED;
Load tables between two instances/databases using SQL
1. Set up the Informix environment on the Linux server.
There are three files you should modify so that you can connect to and run SQL against another instance on a different server
without being prompted for a username and password.
# su informix
$ ls -la
total 28
drwxr-xr-x 2 informix informix 4096 Sep 11 17:13 .
drwxr-xr-x 5 root root 4096 Sep 8 22:42 ..
-rw------- 1 informix informix 326 Sep 11 14:24 .bash_history
-rwxr-xr-x 1 informix informix 259 Sep 8 23:02 .bash_profile
-rw------- 1 informix informix 137 Sep 11 17:13 .netrc
-rw------- 1 informix informix 975 Sep 11 17:13 .viminfo
$ chmod 600 .netrc
$ more .netrc
machine ifx01 login lchen password admini@12
machine ipdev login lchen password admini@12
machine db2cm64 login lchen password admini@12
$ more /opt/IBM/informix/etc/sqlhosts.ol_informix1170
ol_informix1170 onsoctcp db2cm64 ol_informix1170
dr_informix1170 drsoctcp db2cm64 dr_informix1170
ipdb onsoctcp ifx01 ipdbsvc
systestdb onsoctcp ipdev systestdbsvc
-bash-3.2$ tail -10 /etc/services
ol_informix1170 8166/tcp
dr_informix1170 15103/tcp
systestdbsvc 6600/tcp
ipdbsvc 6800/tcp
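The .netrc setup above can be sketched as a script. A minimal sketch for illustration: NETRC_HOME is a temporary stand-in for the informix home directory, and CHANGE_ME is a placeholder, not a real password:

```shell
#!/bin/sh
# Temp stand-in for the informix user's home directory.
NETRC_HOME=$(mktemp -d)

cat > "$NETRC_HOME/.netrc" <<'EOF'
machine ifx01 login lchen password CHANGE_ME
machine ipdev login lchen password CHANGE_ME
EOF

# ftp refuses a .netrc that other users can read, so lock it down:
chmod 600 "$NETRC_HOME/.netrc"
ls -l "$NETRC_HOME/.netrc"
```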
2. Run SQL in $dbaccess
SQL: New Run Modify Use-editor Output Choose Save Info Drop Exit
Run the current SQL statements.
------------ ip_0p@ol_informix1170 ------------- Press CTRL-W for Help --------
INSERT INTO b3
SELECT * FROM ip_systest@systestdb:informix.b3;
$ onstat
IBM Informix Dynamic Server Version 11.70.FC5DE -- On-Line (CKPT REQ) (LONGTX) -- Up 00:45:36 -- 173796 Kbytes
Blocked:CKPT LONGTX
$ onstat -m
IBM Informix Dynamic Server Version 11.70.FC5DE -- On-Line (CKPT REQ) (LONGTX) -- Up 00:46:23 -- 173796 Kbytes
Blocked:CKPT LONGTX
Message Log File: /opt/IBM/informix/ol_informix1170.log
09:00:33 Performance Advisory: Based on the current workload, the physical log might be too small to
accommodate the time it takes to flush the buffer pool.
09:00:33 Results: The server might block transactions during checkpoints.
09:00:33 Action: If transactions are blocked during the checkpoint, increase the size of the
physical log to at least 103436 KB.
09:00:33 Performance Advisory: The physical log is too small for automatic checkpoints.
09:00:33 Results: Automatic checkpoints are disabled.
09:00:33 Action: To enable automatic checkpoints, increase the physical log to at least 103436 KB.
09:00:34 Performance Advisory: The physical log is running out of room during checkpoint processing.
09:00:34 Results: Transactions are being blocked until the checkpoint is complete.
09:00:34 Action: Increase the physical log size.
09:00:35 Checkpoint Completed: duration was 1 seconds.
09:00:35 Tue Aug 21 - loguniq 140, logpos 0xa85174, timestamp: 0xc4f7861 Interval: 1313
09:00:35 Maximum server connections 3
09:00:35 Checkpoint Statistics - Avg. Txn Block Time 0.000, # Txns blocked 1, Plog used 11316, Llog used 8661
09:00:36 Logical Log 140 Complete, timestamp: 0xc5227b0.
09:00:37 Logical Log Files are Full -- Backup is Needed
You need to back up the log files. Change the log tape device to /dev/null using onmonitor before you do the log backup.
$ export TERM=vt200
$ onmonitor
INITIALIZATION: Make desired changes and press ESC to record changes.
Press Interrupt to abort changes. Press F2 or CTRL-F for field-level help.
DISK PARAMETERS
Page Size [ 2] Kbytes Mirror [N]
Tape Dev. [/ix_tmp/tapedev ] Block Size [ 32] Kbytes Total Tape Size [ 0] Kbytes
Log Tape Dev. [/x_tmp/ltapedev ] Block Size [ 32] Kbytes Total Tape Size [ 0] Kbytes Stage Blob [ ]
Root Name [rootdbs ] Root Size [ 200000] Kbytes
Primary Path [/opt/IBM/informix/ol_informix1170/dbspaces/rootdbs ] Root Offset [ 0] Kbytes
Mirror Path [ ] Mirror Offset [ 0] Kbytes
Phy. Log Size [ 30176] Kbytes Log. Log Size [ 10000] Kbytes Number of Logical Logs [ 14]
Enter the log tape device pathname
Tip: You can define the tape device as above, then use a symbolic link to point it at any device you want to use:
ln -s /dev/null /ix_tmp/tapedev
ln -s /dev/null /ix_tmp/ltapedev
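The symlink trick can be verified before running ontape. A minimal sketch, with TAPE_DIR as a temporary stand-in for /ix_tmp:

```shell
#!/bin/sh
# Temp stand-in for /ix_tmp on the real server.
TAPE_DIR=$(mktemp -d)

# Point both tape devices at /dev/null so backups are discarded.
ln -s /dev/null "$TAPE_DIR/tapedev"
ln -s /dev/null "$TAPE_DIR/ltapedev"

# Anything written through the link vanishes into /dev/null:
echo "discarded backup data" > "$TAPE_DIR/ltapedev"
ls -l "$TAPE_DIR"
```

Discarding log backups this way is only safe for a test load; on a production instance, the logical-log backups are what restores depend on.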
$ ontape -a
Performing automatic backup of logical logs.
Please mount tape 1 on /opt/IBM/informix/ltapedev and press Return to continue ...
Do you want to back up the current logical log? (y/n) y
Read/Write End Of Medium enabled: blocks = 4337
Please label this tape as number 1 in the log tape sequence.
This tape contains the following logical logs:
128 - 142
Program over.
-bash-3.2$ onstat -l
IBM Informix Dynamic Server Version 11.70.FC5DE -- On-Line -- Up 01:07:05 -- 181988 Kbytes
Physical Logging
Buffer bufused bufsize numpages numwrits pages/io
P-1 48 64 30627 565 54.21
phybegin physize phypos phyused %used
2:6325 15088 8266 2376 15.75
Logical Logging
Buffer bufused bufsize numrecs numpages numwrits recs/pages pages/io
L-3 10 32 1701136 67273 3868 25.3 17.4
Subsystem numrecs Log Space used
OLDRSAM 1701128 132793268
HA 8 352
address number flags uniqid begin size used %used
4b840c50 7 U-B---- 134 3:53 4608 4608 100.00
4b840cb8 8 U-B---- 135 3:4661 4608 4608 100.00
4b840d20 3 U-B---- 136 2:53 4608 4608 100.00
4b840d88 4 U-B---- 137 1:2953 4608 4608 100.00
4b840df0 6 U-B---- 138 1:12169 4608 4608 100.00
4b840e58 13 U-B---- 139 1:36043 4608 4608 100.00
4b840ec0 14 U-B---- 140 1:40651 4608 4608 100.00
4b840f28 5 U-B---- 141 1:7561 4608 4608 100.00
4b840f90 9 U-B---- 142 3:9269 4608 4608 100.00
4dddde98 15 U---C-L 143 1:45259 4608 1262 27.39
4dd1ab48 16 A------ 0 1:49867 4608 0 0.00
4b6f9ea8 10 U-B---- 129 3:13877 4608 4608 100.00
4b6f9f10 11 U-B---- 130 3:18485 4608 4608 100.00
4b6f9f78 1 U-B---- 131 1:24475 4608 4608 100.00
4b6fa438 2 U-B---- 132 1:29083 4608 4608 100.00
4b826450 12 U-B---- 133 3:23093 4608 4608 100.00
16 active, 16 total
-bash-3.2$ onstat -m
IBM Informix Dynamic Server Version 11.70.FC5DE -- On-Line -- Up 01:09:43 -- 181988 Kbytes
Message Log File: /opt/IBM/informix/ol_informix1170.log
09:46:41 Logical Log 138 - Backup Started
09:46:41 Logical Log 138 - Backup Completed
09:46:41 Logical Log 139 - Backup Started
09:46:41 Logical Log 139 - Backup Completed
09:46:41 Logical Log 140 - Backup Started
09:46:41 Logical Log 140 - Backup Completed
09:46:41 Logical Log 141 - Backup Started
09:46:41 Logical Log 141 - Backup Completed
09:46:49 Logical Log 142 - Backup Started
09:46:49 Dynamically added log file 16 to DBspace 1
09:46:51 Checkpoint Completed: duration was 0 seconds.
09:46:51 Tue Aug 21 - loguniq 143, logpos 0x2a4, timestamp: 0xc56eecd Interval: 1316
09:46:51 Maximum server connections 3
09:46:51 Checkpoint Statistics - Avg. Txn Block Time 0.000, # Txns blocked 0, Plog used 7252, Llog
used 4606
09:46:51 Logical Log 142 - Backup Completed
09:46:53 Long Transaction 0x4b829930 Aborted. Rollback Duration: 2784 Seconds
09:46:54 Logical Log 141 Complete, timestamp: 0xc57db60.
09:46:54 Logical Log 142 Complete, timestamp: 0xc57db60.
To see how many locks each user thread holds and how many write calls it has executed (and whether more LOCKS are needed):
$ onstat -u
$ onstat -c | grep LOCKS
# LOCKS - The initial number of locks when Informix starts.
LOCKS 3000000
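The effective LOCKS value can be pulled out of the onconfig output with a one-line awk. A sketch with the sample above inlined so it is self-contained; on a real server, pipe onstat -c into the same awk:

```shell
#!/bin/sh
# Inline a sample of the "onstat -c" output shown above.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
# LOCKS - The initial number of locks when Informix starts.
LOCKS 3000000
EOF

# Comment lines start with '#', so matching the first field skips them.
locks=$(awk '$1 == "LOCKS" { print $2 }' "$cfg")
echo "LOCKS = $locks"
```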
Tip: When loading a large file (table), it is a good idea to change the database to No-Log mode:
$ ontape -s -N ip_0p
So this has to be done again; clean the backup device and release the disk space first.
$ cat /dev/null > /opt/IBM/informix/tapedev
Disconnect all sessions from database ip_0p and close the database.
$ ontape -s -N ip_0p
Please enter the level of archive to be performed (0, 1, or 2) 0
Archive failed - Error changing logging status - 'ip_0p'. iserrno 107.
Program over.
-bash-3.2$ ontape -s -N ip_0p
Please enter the level of archive to be performed (0, 1, or 2) 0
Please mount tape 1 on /opt/IBM/informix/tapedev and press Return to continue ...
10 percent done.
20 percent done.
30 percent done.
40 percent done.
50 percent done.
60 percent done.
70 percent done.
80 percent done.
100 percent done.
Read/Write End Of Medium enabled: blocks = 36226
Please label this tape as number 1 in the arc tape sequence.
This tape contains the following logical logs:
143
Program over.
Add another 1 GB chunk file to the logdbs dbspace
$ onspaces -a logdbs -p /ix_dat/ix_llog.1 -o 0 -s 1000000
Verifying physical disk space, please wait ...
Chunk successfully added.
$ onparams -a -d logdbs -s 900000 -i
Log operation started. To monitor progress, use the onstat -l command.
Logical log successfully added.
$ onstat -l
IBM Informix Dynamic Server Version 11.70.FC5DE -- On-Line -- Up 01:39:19 -- 181988 Kbytes
Physical Logging
Buffer bufused bufsize numpages numwrits pages/io
P-1 0 64 96094 1687 56.96
phybegin physize phypos phyused %used
2:6325 15088 14363 7 0.05
Logical Logging
Buffer bufused bufsize numrecs numpages numwrits recs/pages pages/io
L-3 0 32 3608414 140850 6284 25.6 22.4
Subsystem numrecs Log Space used
OLDRSAM 3608393 280934956
HA 21 924
address number flags uniqid begin size used %used
4b840c50 7 U------ 150 3:53 4608 4608 100.00
4b840cb8 8 U------ 151 3:4661 4608 4608 100.00
4b840d20 3 U------ 152 2:53 4608 4608 100.00
4b840d88 4 U------ 153 1:2953 4608 4608 100.00
4b840df0 6 U------ 154 1:12169 4608 4608 100.00
4b840e58 13 U------ 155 1:36043 4608 4608 100.00
4b840ec0 14 U------ 156 1:40651 4608 4608 100.00
4b840f28 5 U------ 157 1:7561 4608 4608 100.00
4b840f90 9 U------ 158 3:9269 4608 4608 100.00
4e59c330 17 U---C-L 159 1:63518 4608 887 19.25
4e59c228 19 A------ 0 12:3 450000 0 0.00
4e59c100 18 A------ 0 1:68126 4608 0 0.00
4dddde98 15 U------ 143 1:45259 4608 4608 100.00
4dd1ab48 16 U------ 144 1:49867 4608 4608 100.00
4b6f9ea8 10 U------ 145 3:13877 4608 4608 100.00
4b6f9f10 11 U------ 146 3:18485 4608 4608 100.00
4b6f9f78 1 U------ 147 1:24475 4608 4608 100.00
4b6fa438 2 U------ 148 1:29083 4608 4608 100.00
4b826450 12 U------ 149 3:23093 4608 4608 100.00
19 active, 19 total
$ onstat -d
IBM Informix Dynamic Server Version 11.70.FC5DE -- On-Line -- Up 01:40:13 -- 181988 Kbytes
Dbspaces
address number flags fchunk nchunks pgsize flags owner name
4b6fa028 1 0x60001 1 1 2048 N BA informix rootdbs
4b826558 2 0x40001 2 1 2048 N BA informix physdbs
4b826700 3 0x60001 3 2 2048 N BA informix logdbs
4b8268a8 4 0x40001 4 1 2048 N BA informix datadbs
4b826a50 5 0x48001 5 1 2048 N SBA informix sbspace
4b826bf8 6 0x42001 6 2 2048 N TBA informix tempdbs
4b826da0 7 0x40001 7 4 2048 N BA informix datadbs1
7 active, 2047 maximum
Chunks
address chunk/dbs offset size free bpages flags pathname
4b6fa1d0 1 1 0 100000 39805 PO-B--
/opt/IBM/informix/ol_informix1170/dbspaces/rootdbs
4b6fa4a0 2 2 0 25088 5339 PO-B--
/opt/IBM/informix/ol_informix1170/dbspaces/plogdbs
4b6fa6a0 3 3 0 30720 3019 PO-B--
/opt/IBM/informix/ol_informix1170/dbspaces/llogdbs
4b6fa8a0 4 4 0 25600 25547 PO-B--
/opt/IBM/informix/ol_informix1170/dbspaces/datadbs
4b6faaa0 5 5 0 16384 15205 15205 POSB--
/opt/IBM/informix/ol_informix1170/dbspaces/sbspace
Metadata 1126 837 1126
4b6faca0 6 6 0 25600 25547 PO-B--
/opt/IBM/informix/ol_informix1170/dbspaces/tempdbs
4d709028 7 7 0 500000 0 PO-B-- /ix_dat/ix_dat.1
4d709228 8 7 0 500000 405789 PO-B-- /ix_dat/ix_dat.2
4d709428 9 7 0 500000 499997 PO-B-- /ix_dat/ix_dat.3
4d709628 10 7 0 500000 499997 PO-B-- /ix_dat/ix_dat.4
4d709828 11 6 0 500000 499997 PO-B-- /ix_dat/ix_temp.1
4dd1abb0 12 3 0 500000 49997 PO-B-- /ix_dat/ix_llog.1
12 active, 32766 maximum
NOTE: The values in the "size" and "free" columns for DBspace chunks are
displayed in terms of "pgsize" of the DBspace to which they belong.
Expanded chunk capacity mode: always
The logical logs filled again, so add another 1 GB log chunk file:
-bash-3.2$ touch ix_llog.2
-bash-3.2$ ls -l
total 6005920
-rw-rw---- 1 informix informix 1024000000 Aug 21 10:42 ix_dat.1
-rw-rw---- 1 informix informix 1024000000 Aug 21 10:42 ix_dat.2
-rw-rw---- 1 informix informix 1024000000 Aug 17 10:36 ix_dat.3
-rw-rw---- 1 informix informix 1024000000 Aug 17 10:36 ix_dat.4
-rw-rw---- 1 informix informix 1024000000 Aug 21 10:44 ix_llog.1
-rw-rw-r-- 1 informix informix 0 Aug 21 10:44 ix_llog.2
-rw-rw---- 1 informix informix 1024000000 Aug 21 08:40 ix_temp.1
drw-rw---- 2 informix informix 16384 Aug 17 09:38 lost+found
-bash-3.2$ chmod 660 ix_llog.2
-bash-3.2$ ls -l
total 6005920
-rw-rw---- 1 informix informix 1024000000 Aug 21 10:44 ix_dat.1
-rw-rw---- 1 informix informix 1024000000 Aug 21 10:44 ix_dat.2
-rw-rw---- 1 informix informix 1024000000 Aug 17 10:36 ix_dat.3
-rw-rw---- 1 informix informix 1024000000 Aug 17 10:36 ix_dat.4
-rw-rw---- 1 informix informix 1024000000 Aug 21 10:44 ix_llog.1
-rw-rw---- 1 informix informix 0 Aug 21 10:44 ix_llog.2
-rw-rw---- 1 informix informix 1024000000 Aug 21 08:40 ix_temp.1
drw-rw---- 2 informix informix 16384 Aug 17 09:38 lost+found
-bash-3.2$ onspaces -a logdbs -p /ix_dat/ix_llog.2 -o 0 -s 1000000
Verifying physical disk space, please wait ...
Chunk successfully added.
-bash-3.2$ onparams -a -d logdbs -s 999900 -i
Log operation started. To monitor progress, use the onstat -l command.
Logical log successfully added.
$ dbaccess
SQL: New Run Modify Use-editor Output Choose Save Info Drop Exit
Run the current SQL statements.
------------ ip_0p@ol_informix1170 ------------- Press CTRL-W for Help --------
INSERT INTO b3
SELECT * FROM ip_systest@systestdb:informix.b3
WHERE EXTEND(TO_DATE(approveddate,"%Y/%m/%d %H:%M:%S"),YEAR TO SECOND) <
(EXTEND(current, YEAR TO SECOND) - INTERVAL(1) YEAR TO YEAR - INTERVAL(7) MONTH TO MONTH);
Use a TEMP table to guarantee that the data inserted into the archive DB is exactly the same as the data deleted from the
original production source table.
To disable logging on temporary tables, set the TEMPTAB_NOLOG configuration parameter to 1.
# TEMPTAB_NOLOG - Controls the default logging mode for temporary
TEMPTAB_NOLOG 0
$ onmode -wf TEMPTAB_NOLOG=1
17:01:52 Value of TEMPTAB_NOLOG has been changed to 1.
$ onmode -wm TEMPTAB_NOLOG=1
17:02:00 Value of TEMPTAB_NOLOG has been changed to 1.
$ dbaccess
SQL: New Run Modify Use-editor Output Choose Save Info Drop Exit
Run the current SQL statements.
------------ ip_0p@ol_informix1170 ------------- Press CTRL-W for Help --------
SELECT * FROM ip_systest@systestdb:informix.b3
WHERE EXTEND(TO_DATE(approveddate,"%Y/%m/%d %H:%M:%S"),YEAR TO SECOND) >
(EXTEND(current, YEAR TO SECOND) - INTERVAL(1) YEAR TO YEAR - INTERVAL(7) MONTH TO MONTH)
INTO TEMP tmp_b3;
INSERT INTO b3 SELECT * FROM tmp_b3 t_b3
WHERE t_b3.b3iid NOT IN (SELECT b3iid FROM b3);
DELETE FROM ip_systest@systestdb:informix.b3 o_b3
WHERE o_b3.b3iid IN (SELECT b3iid FROM tmp_b3);
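The guarantee comes from using the same TEMP snapshot for both the INSERT and the DELETE. The set logic can be sketched with plain files and standard tools: "tmp" plays the role of tmp_b3, "archive" of the archive table, "prod" of the production table; the IDs are illustrative:

```shell
#!/bin/sh
work=$(mktemp -d)
printf '%s\n' 101 102 103     > "$work/tmp"      # snapshot to archive
printf '%s\n' 102             > "$work/archive"  # already archived
printf '%s\n' 100 101 102 103 > "$work/prod"     # production rows

# INSERT ... WHERE id NOT IN (archive): rows in tmp but not yet archived.
comm -23 "$work/tmp" "$work/archive" >> "$work/archive"
sort -o "$work/archive" "$work/archive"

# DELETE ... WHERE id IN (tmp): production keeps only rows outside tmp.
comm -23 "$work/prod" "$work/tmp" > "$work/prod.new"
mv "$work/prod.new" "$work/prod"

echo "archive now: $(tr '\n' ' ' < "$work/archive")"
echo "prod now:    $(tr '\n' ' ' < "$work/prod")"
```

Because both steps read the same snapshot, a row added to production between the two statements is neither archived nor deleted; it simply waits for the next run.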
Table sizes for reference:
b3: 3,021,376,578 bytes
b3b: 1,070,955 bytes
containers: 2,682,988 bytes
status_history: 698,664,792 bytes
b3_subheader: 715,614,824 bytes
b3_line: 14,957,060,547 bytes
b3_line_comment: 471,820 bytes
b3_recap_details: 6,377,817,173 bytes
Tip: Add more I/O and CPU virtual processors to tune I/O performance:
$ onmode -p +10 aio
$ onmode -p +10 cpu
When using TEMP tables, add more tempdbs space.
First, drop the former chunk, only because the demo license version cannot support this many chunks.
-bash-3.2$ onspaces -d tempdbs -p /ix_tmp/ix_temp.1 -o 0
WARNING: Dropping a chunk.
Do you really want to continue? (y/n)y
Chunk successfully dropped.
** WARNING ** A level 0 archive for DBspace tempdbs will need to be done
before '/ix_dat/ix_temp.1' can be reused (see Dynamic Server Administrator's manual).
$ cat /dev/null > /ix_tmp/ix_temp.1
$ onspaces -a tempdbs -p /ix_tmp/ix_temp.1 -o 0 -s 4000000
Verifying physical disk space, please wait ...
Chunk successfully added.
$dbaccess
SET CONSTRAINTS,INDEXES,TRIGGERS FOR b3b DISABLED;
SET CONSTRAINTS,INDEXES,TRIGGERS FOR containers DISABLED;
SET CONSTRAINTS,INDEXES,TRIGGERS FOR status_history DISABLED;
SET CONSTRAINTS,INDEXES,TRIGGERS FOR b3 DISABLED;
Then drop the primary key definition from b3 and turn off logging on table b3:
$ dbaccess
DROP INDEX <>;
ALTER TABLE b3 DROP CONSTRAINT <>
ALTER TABLE b3 TYPE (RAW)
SELECT * FROM ip_systest@systestdb:informix.b3
WHERE EXTEND(TO_DATE(approveddate,"%Y/%m/%d %H:%M:%S"),YEAR TO SECOND) >
(EXTEND(current, YEAR TO SECOND) - INTERVAL(1) YEAR TO YEAR - INTERVAL(7) MONTH TO MONTH)
INTO TEMP tmp_b3;
INSERT INTO b3 SELECT * FROM tmp_b3
CREATE INDEX <> ON b3 (b3iid);
ALTER TABLE b3 ADD CONSTRAINT primary key (b3iid);
Table altered.
SET CONSTRAINTS,INDEXES,TRIGGERS FOR b3 ENABLED;
SET CONSTRAINTS,INDEXES,TRIGGERS FOR status_history ENABLED;
SET CONSTRAINTS,INDEXES,TRIGGERS FOR containers ENABLED;
SET CONSTRAINTS,INDEXES,TRIGGERS FOR b3b ENABLED;
ALTER TABLE b3 TYPE (standard)
alter table "informix".containers add constraint (foreign key
(b3iid) references "informix".b3 );
alter table "informix".b3b add constraint (foreign key
(b3iid) references "informix".b3 );
alter table "informix".status_history add constraint (foreign key
(b3iid) references "informix".b3 );
After the data is loaded, add the primary key constraint back to table b3 on column (b3iid):
$ dbaccess
SQL: New Run Modify Use-editor Output Choose Save Info Drop Exit
Run the current SQL statements.
------------ ip_0p@ol_informix1170 ------------- Press CTRL-W for Help --------
SELECT * FROM ip_systest@systestdb:informix.b3
WHERE EXTEND(TO_DATE(approveddate,"%Y/%m/%d %H:%M:%S"),YEAR TO SECOND) <
(EXTEND(current, YEAR TO SECOND) - INTERVAL(1) YEAR TO YEAR - INTERVAL(7) MONTH TO MONTH)
INTO TEMP tmp_b3;
Or, alternatively, select by an explicit date range:
SELECT * FROM ip_systest@systestdb:informix.b3
WHERE approveddate >= '2011/03/01' and approveddate < '2011/04/01'
INTO TEMP tmp_b3;
Then delete the archived rows:
DELETE FROM b3 WHERE b3iid IN (SELECT b3iid FROM tmp_b3)
180162 row(s) deleted.
To solve the log file space issue:
$ onstat -c | grep LTX
# LTXHWM - The percentage of the logical logs that can be
# LTXEHWM - The percentage of the logical logs that have been
# LTXHWM and LTXEHWM because the server can add new logical logs
# If dynamic logging is off, set LTXHWM and LTXEHWM to
# When using Enterprise Replication, set LTXEHWM to at least 30%
# higher than LTXHWM to minimize log overruns.
LTXHWM 70
LTXEHWM 80
$ onmode -wm LTXEHWM=100
09:58:27 Value of LTXEHWM has been changed to 100.
$ onmode -wf LTXEHWM=100
09:58:37 Value of LTXEHWM has been changed to 100.
$ onmode -wm LTXHWM=100
09:58:52 Value of LTXHWM has been changed to 100.
$ onmode -wf LTXHWM=100
09:58:58 Value of LTXHWM has been changed to 100.
Turn the database ip_0p log mode back on (you may need to bring the instance to single-user mode first):
$ oninit -s
or: $ onmode -s
$ ontape -s -U ip_0p
Please enter the level of archive to be performed (0, 1, or 2) 0
Please mount tape 1 on /ix_tmp/tapedev and press Return to continue ...
10 percent done.
20 percent done.
30 percent done.
40 percent done.
50 percent done.
60 percent done.
70 percent done.
80 percent done.
90 percent done.
100 percent done.
Read/Write End Of Medium enabled: blocks = 134992
Please label this tape as number 1 in the arc tape sequence.
This tape contains the following logical logs:
17
Program over.
$dbaccess
SQL: New Run Modify Use-editor Output Choose Save Info Drop Exit
Run the current SQL statements.
------------ ip_0p@ol_informix1170 ------------- Press CTRL-W for Help --------
SELECT * FROM ip_systest@systestdb:informix.b3
WHERE EXTEND(TO_DATE(approveddate,"%Y/%m/%d %H:%M:%S"),YEAR TO SECOND) <
(EXTEND(current, YEAR TO SECOND) - INTERVAL(1) YEAR TO YEAR - INTERVAL(7) MONTH TO MONTH)
INTO TEMP tmp_b3;
INSERT INTO b3 SELECT * FROM tmp_b3 WHERE b3iid NOT IN (select b3iid from b3);
180162 row(s) inserted.
INSERT INTO b3b SELECT * FROM ip_systest@systestdb:informix.b3b
INSERT INTO containers SELECT * FROM ip_systest@systestdb:informix.containers
INSERT INTO status_history SELECT * FROM ip_systest@systestdb:informix.status_history
INSERT INTO containers SELECT * FROM ip_systest@systestdb:informix.containers
WHERE b3iid NOT IN (SELECT b3iid from containers)
Insert a large table piece by piece using rowid ranges:
$dbaccess
SQL: New Run Modify Use-editor Output Choose Save Info Drop Exit
Run the current SQL statements.
------------ ip_0p@ol_informix1170 ------------- Press CTRL-W for Help --------
insert into b3 select * from ip_systest@systestdb:informix.b3
where rowid >5000000 and rowid < 15000000
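The rowid window above can be scripted so the whole copy proceeds in bounded slices. A minimal sketch, assuming a 2,000,000-row window is acceptable for your logical-log capacity (tune the step to your instance):

```shell
#!/bin/sh
# Emit one INSERT per rowid window so each batch is a separate,
# bounded transaction; the window size here is an assumption.
gen_rowid_inserts() {
    start=$1; end=$2; step=$3
    lo=$start
    while [ "$lo" -lt "$end" ]; do
        hi=$((lo + step))
        echo "INSERT INTO b3 SELECT * FROM ip_systest@systestdb:informix.b3 WHERE rowid >= $lo AND rowid < $hi;"
        lo=$hi
    done
}

gen_rowid_inserts 5000000 15000000 2000000
# Review, then pipe to the engine:
#   gen_rowid_inserts 5000000 15000000 2000000 | dbaccess ip_0p@ol_informix1170 -
```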
create trigger "informix".td_b3 delete on "informix".b3 referencing old as old_del for each row
(
execute procedure "informix".pd_b3(old_del.b3iid ));
create procedure "informix".pd_b3(old_b3iid integer)
define errno integer;
define errmsg char(255);
define numrows integer;
-- Delete all children in "b3_subheader"
delete from b3_subheader
where b3iid = old_b3iid;
-- Delete all children in "b3b"
delete from b3b
where b3iid = old_b3iid;
-- Delete all children in "status_history"
delete from status_history
where b3iid = old_b3iid;
-- Delete all children in "containers"
delete from containers
where b3iid = old_b3iid;
end procedure;
create procedure "informix".pd_b3_subheader(old_b3subiid integer)
define errno integer;
define errmsg char(255);
define numrows integer;
-- Delete all children in "b3_line"
delete from b3_line
where b3subiid = old_b3subiid;
end procedure;
create procedure "informix".pd_b3_line(old_b3lineiid integer)
define errno integer;
define errmsg char(255);
define numrows integer;
-- Delete all children in "b3_recap_details"
delete from b3_recap_details
where b3lineiid = old_b3lineiid;
-- Delete all children in "b3_line_comment"
delete from b3_line_comment
where b3lineiid = old_b3lineiid;
end procedure;
create procedure "informix".pd_rpt_b3(old_b3iid integer)
define errno integer;
define errmsg char(255);
define numrows integer;
-- Delete all children in "rpt_b3_subheader"
delete from rpt_b3_subheader
where b3iid = old_b3iid;
end procedure;
create procedure "informix".pi_b3(new_liiclientno integer,
new_liiaccountno integer)
define errno integer;
define errmsg char(255);
define numrows integer;
-- Parent "lii_account" must exist when inserting a child in "b3"
if new_liiclientno is not null and
new_liiaccountno is not null then
select count(*)
into numrows
from lii_account
where liiclientno = new_liiclientno
and liiaccountno = new_liiaccountno;
if (numrows = 0) then
let errno = -1002;
let errmsg = "Parent does not exist in ""lii_account"". Cannot create child in ""b3"".";
raise exception -746, 0, errmsg;
end if;
end if;
end procedure;
create procedure "informix".pi_b3b(new_b3iid integer)
define errno integer;
define errmsg char(255);
define numrows integer;
-- Parent "b3" must exist when inserting a child in "b3b"
if new_b3iid is not null then
select count(*)
into numrows
from b3
where b3iid = new_b3iid;
if (numrows = 0) then
let errno = -1002;
let errmsg = "Parent does not exist in ""b3"". Cannot create child in ""b3b"".";
raise exception -746, 0, errmsg;
end if;
end if;
end procedure;
Synchronize tables between the production and development databases when the table has a unique constraint on two columns:
$dbaccess
SQL: New Run Modify Use-editor Output Choose Save Info Drop Exit
Run the current SQL statements.
----------------------- ip_systest@systestdb --- Press CTRL-W for Help --------
insert into lii_client select * from ip_0p@ipdb:informix.lii_client
where liiclientno NOT IN (select liiclientno from lii_client);
insert into lii_account select * from ip_0p@ipdb:informix.lii_account r
where (select count(*) from lii_account l
where r.liiclientno=l.liiclientno and r.liiaccountno=l.liiaccountno)
= 0;
Archive and Purge B3 Table
$dbaccess
------------ ip_0p@ol_informix1170 ------------- Press CTRL-W for Help --------
drop procedure archiveandpurge()
------------ ip_0p@ol_informix1170 ------------- Press CTRL-W for Help --------
drop PROCEDURE insertarch
-bash-3.2$ dbaccess ip_0p@ol_informix1170 < insertarch.sql
-bash-3.2$ dbaccess ip_0p@ol_informix1170 < archiveandpurge.sql
Database selected.
Routine created.
Database closed.
------------ ip_0p@ol_informix1170 ------------- Press CTRL-W for Help --------
CREATE PROCEDURE "informix".archiveandpurge() RETURNING CHAR(20), CHAR(20), INT
--Define Working variables
DEFINE startdate CHAR(20);
DEFINE enddate CHAR(20);
DEFINE archivecount INT;
DEFINE archiveDay DATE;
LET startdate = EXTEND(current, YEAR TO MONTH) - INTERVAL(1) YEAR TO YEAR - INTERVAL(7) MONTH TO MONTH;
LET enddate = EXTEND(current, YEAR TO MONTH) - INTERVAL(1) YEAR TO YEAR - INTERVAL(6) MONTH TO MONTH;
LET archiveDay = TODAY;
EXECUTE PROCEDURE insertArch(startdate, enddate);
SELECT COUNT(*)
INTO archivecount
FROM reporterr
WHERE currentday = archiveDay;
IF archivecount = 0 THEN
-- EXECUTE PROCEDURE deleteB3(startdate, enddate);
END IF;
RETURN startdate, enddate, archivecount;
END PROCEDURE;
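The two LET statements compute a one-month archive window: from the first of the month 19 months ago (current month minus 1 year 7 months) up to the first of the month 18 months ago. The same window can be sanity-checked from the shell; this sketch assumes GNU date with the -d option, which the stock AIX /usr/bin/date lacks:

```shell
#!/bin/sh
# Reproduce the archiveandpurge() window: the first day of the month
# 19 months before the given base date, up to 18 months before it.
# Requires GNU date; AIX /usr/bin/date has no -d option.
archive_window() {
    base=$1                                      # e.g. 2012-10-01
    start=$(date -d "$base -19 months" +%Y/%m/01)
    end=$(date -d "$base -18 months" +%Y/%m/01)
    echo "$start $end"
}

archive_window 2012-10-01
```

Comparing this output against the startdate/enddate returned by the procedure is a quick way to verify the INTERVAL arithmetic before letting autoArchive.ksh purge anything.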
CREATE PROCEDURE "informix".insertarch(startdate CHAR(20),enddate CHAR(20))
-- Declare b3 table columns
DEFINE s_b3iid INT;
DEFINE s_liiclientno INT;
DEFINE s_liiaccountno INT;
DEFINE s_liibrchno INT;
DEFINE s_liirefno INT;
DEFINE s_acctsecurno INT;
DEFINE s_b3type CHAR(2);
DEFINE s_cargcntrlno CHAR(25);
DEFINE s_carriercode CHAR(4);
DEFINE s_createdate CHAR(20);
DEFINE s_custoff CHAR(4);
DEFINE s_k84date CHAR(20);
DEFINE s_modetransp CHAR(2);
DEFINE s_portunlading CHAR(4);
DEFINE s_reldate CHAR(20);
DEFINE s_status INT;
DEFINE s_totb3duty float;
DEFINE s_totb3exctax float;
DEFINE s_totb3gst float;
DEFINE s_totb3sima float;
DEFINE s_totb3vfd float;
DEFINE s_transno INT;
DEFINE s_weight INT;
DEFINE s_purchaseorder1 CHAR(15);
DEFINE s_purchaseorder2 CHAR(15);
DEFINE s_shipvia CHAR(18);
DEFINE s_locationofgoods CHAR(17);
DEFINE s_containerno CHAR(20);
DEFINE s_vendorname CHAR(25);
DEFINE s_vendorstate CHAR(3);
DEFINE s_vendorzip CHAR(10);
DEFINE s_freight float;
DEFINE s_usportexit CHAR(5);
DEFINE s_billoflading CHAR(10);
DEFINE s_cargcntrlqty float;
DEFINE s_approveddate CHAR(20);
--Define Working variables
DEFINE tableName CHAR(25);
DEFINE currentDay DATE;
DEFINE mode CHAR(1);
DEFINE sqlErr INT;
DEFINE isamErr INT;
-- Trap Exception
ON EXCEPTION SET sqlErr, isamErr
CALL reportErr(currentDay,tableName,mode, s_b3iid, sqlErr,isamErr);
END EXCEPTION WITH RESUME;
SET LOCK MODE TO WAIT 60;
LET currentDay = today;
LET tableName = 'B3';
LET mode = 'I';
LET s_b3iid = NULL;
FOREACH WITH HOLD
SELECT b3iid, liiclientno, liiaccountno, liibrchno, liirefno, acctsecurno, b3type,
cargcntrlno, carriercode, createdate, custoff, k84date, modetransp,
portunlading, reldate, status, totb3duty, totb3exctax, totb3gst,
totb3sima, totb3vfd, transno, weight, purchaseorder1, purchaseorder2,
shipvia, locationofgoods, containerno, vendorname, vendorstate, vendorzip,
freight, usportexit, billoflading, cargcntrlqty, approveddate
INTO s_b3iid, s_liiclientno, s_liiaccountno, s_liibrchno, s_liirefno, s_acctsecurno,
s_b3type, s_cargcntrlno, s_carriercode, s_createdate, s_custoff, s_k84date,
s_modetransp, s_portunlading, s_reldate, s_status, s_totb3duty,
s_totb3exctax, s_totb3gst, s_totb3sima, s_totb3vfd, s_transno, s_weight,
s_purchaseorder1, s_purchaseorder2, s_shipvia, s_locationofgoods, s_containerno,
s_vendorname, s_vendorstate, s_vendorzip, s_freight, s_usportexit,
s_billoflading, s_cargcntrlqty, s_approveddate
FROM ip_0p@ipdb:informix.b3
-- WHERE approveddate >= '2011/03' and approveddate < '2011/04'
WHERE approveddate >= startdate and approveddate < enddate
BEGIN
-- Trap Exception
ON EXCEPTION SET sqlErr, isamErr
CALL reportErr(currentDay,tableName,mode, s_b3iid, sqlErr,isamErr);
END EXCEPTION WITH RESUME;
insert into b3
values(s_b3iid, s_liiclientno, s_liiaccountno, s_liibrchno, s_liirefno, s_acctsecurno,
s_b3type, s_cargcntrlno, s_carriercode, s_createdate, s_custoff, s_k84date,
s_modetransp, s_portunlading, s_reldate, s_status, s_totb3duty,
s_totb3exctax, s_totb3gst, s_totb3sima, s_totb3vfd, s_transno, s_weight,
s_purchaseorder1, s_purchaseorder2, s_shipvia, s_locationofgoods, s_containerno,
s_vendorname, s_vendorstate, s_vendorzip, s_freight, s_usportexit,
s_billoflading, s_cargcntrlqty, s_approveddate);
END
END FOREACH;
END PROCEDURE;
------------ ip_0p@ol_informix1170 ------------- Press CTRL-W for Help --------
select count(*) from ip_0p@ipdb:informix.b3
where approveddate like "2011/04/%"
(count(*))
275047
------------ ip_0p@ol_informix1170 ------------- Press CTRL-W for Help --------
execute procedure insertarch('2011/03','2011/04')
-bash-3.2$ cd /home/informix/scripts/local/b3_arch
-bash-3.2$ . ./autoArchive.ksh
Database selected.
(expression) (expression) (expression)
2011/02/01 00:00:00 2011/03/01 00:00:00 1
1 row(s) retrieved.
Database closed.
You have mail in /var/spool/mail/root
[lchen@ifx01 /home/lchen] $ lspv
hdisk2 00ca32fde4198d51 livedbvg active
hdisk3 00ca32fde4198fc0 archdbvg active
hdisk4 00ca32fde41a128f appsvg active
hdisk0 00ca32fd35a97b39 rootvg active
hdisk1 00ca32fd35a97d46 rootvg active
[lchen@ifx01 /home/lchen] $ lsvg archdbvg
VOLUME GROUP: archdbvg VG IDENTIFIER: 00ca32fd00004c00000001101750a843
VG STATE: active PP SIZE: 256 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 399 (102144 megabytes)
MAX LVs: 256 FREE PPs: 4 (1024 megabytes)
LVs: 9 USED PPs: 395 (101120 megabytes)
OPEN LVs: 9 QUORUM: 2 (Enabled)
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: yes
MAX PPs per VG: 32512
MAX PPs per PV: 1016 MAX PVs: 32
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable
PV RESTRICTION: none
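Free space in a volume group is FREE PPs multiplied by PP SIZE (here 4 x 256 MB = 1024 MB). A small awk sketch can pull that product out of lsvg output; the here-document stands in for `lsvg archdbvg` so the example is self-contained:

```shell
#!/bin/sh
# Compute free megabytes in a VG from lsvg output: FREE PPs x PP SIZE.
vg_free_mb() {
    awk '
        /PP SIZE:/  { for (i = 1; i <= NF; i++) if ($i == "SIZE:") pp = $(i+1) }
        /FREE PPs:/ { for (i = 1; i <= NF; i++) if ($i == "PPs:")  free = $(i+1) }
        END { print pp * free }
    '
}

vg_free_mb <<'EOF'
VG STATE: active PP SIZE: 256 megabyte(s)
MAX LVs: 256 FREE PPs: 4 (1024 megabytes)
EOF
# Live usage:  lsvg archdbvg | vg_free_mb
```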
[lchen@ifx01 /home/lchen] $ lsvg -l archdbvg
archdbvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
achrootlv jfs2 1 1 1 open/syncd /ach_root
achploglv jfs2 1 1 1 open/syncd /ach_plog
achlloglv jfs2 4 4 1 open/syncd /ach_llog
achdat1lv jfs2 172 172 1 open/syncd /ach_dat1
achdat2lv jfs2 184 184 1 open/syncd /ach_dat2
achidx1lv jfs2 12 12 1 open/syncd /ach_idx1
achidx2lv jfs2 12 12 1 open/syncd /ach_idx2
achtemplv jfs2 8 8 1 open/syncd /ach_temp
loglv01 jfs2log 1 1 1 open/syncd N/A
[lchen@ifx01 /home/lchen] $ df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 2883584 2641184 9% 11947 2% /
/dev/hd2 8126464 5121204 37% 71000 6% /usr
/dev/hd9var 3145728 566632 82% 10547 8% /var
/dev/hd3 5242880 4290560 19% 203 1% /tmp
/dev/hd1 2621440 2140584 19% 2275 1% /home
/proc - - - - - /proc
/dev/hd10opt 7864320 7178244 9% 11894 1% /opt
/dev/ibmlv 10485760 10187912 3% 3564 1% /ibm
/dev/achrootlv 262144 11776 96% 5 1% /ach_root
/dev/netinslv 2621440 2620700 1% 4 1% /netins
/dev/dmqjtmplv 13107200 3334044 75% 1735 1% /dmqjtmp
/dev/recyclelv 15728640 6690080 58% 6064 1% /recyclebox
/dev/achlloglv 1048576 48088 96% 5 1% /ach_llog
/dev/achdat1lv 45088768 1081536 98% 48 1% /ach_dat1
/dev/achdat2lv 48234496 226784 100% 52 1% /ach_dat2
/dev/achidx1lv 3145728 144920 96% 7 1% /ach_idx1
/dev/achidx2lv 3145728 144920 96% 7 1% /ach_idx2
/dev/achtemplv 2097152 72504 97% 6 1% /ach_temp
/dev/appslv 10485760 6351088 40% 20368 2% /usr/apps
/dev/achploglv 262144 11776 96% 5 1% /ach_plog
/dev/ixrootlv 262144 46576 83% 5 1% /ix_root
/dev/ixploglv 262144 5776 98% 5 1% /ix_plog
/dev/ixlloglv 1048576 48280 96% 5 1% /ix_llog
/dev/ixdat1lv 23068672 1064760 96% 26 1% /ix_dat1
/dev/ixdat2lv 26214400 1209968 96% 29 1% /ix_dat2
/dev/ixdat3lv 19922944 919572 96% 23 1% /ix_dat3
/dev/ixidx1lv 7340032 338556 96% 11 1% /ix_idx1
/dev/ixidx2lv 5242880 241732 96% 9 1% /ix_idx2
/dev/ixidx3lv 4194304 193336 96% 8 1% /ix_idx3
/dev/ixtemplv 4194304 193336 96% 8 1% /ix_temp
/dev/insightlv 2097152 1987312 6% 3050 1% /insight
/dev/livedump 262144 261776 1% 4 1% /var/adm/ras/livedump
/dev/hd11admin 524288 523864 1% 5 1% /admin
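Several of the Informix filesystems above sit at 96-100% by design (the chunks are pre-allocated), but a quick filter over `df -k` output helps spot anything newly full. A self-contained sketch, with a here-document standing in for the live command:

```shell
#!/bin/sh
# Print mount points whose %Used is at or above a threshold.
# Feed it `df -k` output; sample rows are supplied here so it runs anywhere.
full_filesystems() {
    t=$1
    awk -v t="$t" 'NR > 1 && $4 != "-" { sub(/%/, "", $4); if ($4 + 0 >= t) print $7, $4 "%" }'
}

full_filesystems 95 <<'EOF'
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 2883584 2641184 9% 11947 2% /
/dev/achdat2lv 48234496 226784 100% 52 1% /ach_dat2
/dev/ixploglv 262144 5776 98% 5 1% /ix_plog
EOF
# Live usage:  df -k | full_filesystems 95
```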
Dbspaces
address number flags fchunk nchunks pgsize flags owner name
50431810 1 0x1 1 1 4096 N informix rootdbs
5051dd50 2 0x1 2 1 4096 N informix llogdbs
5051deb0 3 0x1 3 2 4096 N informix tempdbs1
5138a018 4 0x1 4 1 4096 N informix plogdbs
5138a178 5 0x1 5 44 4096 N informix datadbs1
5138a2d8 6 0x1 27 48 4096 N informix datadbs2
5138a438 7 0x1 51 3 4096 N informix indxdbs1
5138a598 8 0x1 54 3 4096 N informix indxdbs2
< 51390928 52 7 0 250000 1698 PO-- /ach_idx1/ach_idx1.2
< 51390af8 53 7 0 250000 249997 PO-- /ach_idx1/ach_idx1.3
< 51390cc8 54 8 0 250000 177497 PO-- /ach_idx2/ach_idx2.1
---
> 51390928 52 7 0 250000 162 PO-- /ach_idx1/ach_idx1.2
> 51390af8 53 7 0 250000 245901 PO-- /ach_idx1/ach_idx1.3
> 51390cc8 54 8 0 250000 176857 PO-- /ach_idx2/ach_idx2.1
117,119c117,119
< 51399928 100 6 0 250000 182409 PO-- /ach_dat2/ach_dat2.47
< 51399af8 101 6 0 250000 249997 PO-- /ach_dat2/ach_dat2.48
< 51399cc8 102 5 0 250000 211597 PO-- /ach_dat1/ach_dat1.43
---
> 51399928 100 6 0 250000 34945 PO-- /ach_dat2/ach_dat2.47
> 51399af8 101 6 0 250000 184461 PO-- /ach_dat2/ach_dat2.48
> 51399cc8 102 5 0 250000 45709 PO-- /ach_dat1/ach_dat1.43
INFO - b3: Columns Indexes Privileges References Status cOnstraints triGgers Table Fragments Exit
Display fragment strategy for a table.
----------------------- ip_arch03@ardb --------- Press CTRL-W for Help --------
Idx/Tbl name Dbspace Partition Type Expression
199_649 datadbs1 datadbs1 I
b3_rk1 indxdbs1 indxdbs1 I
b3_rk10 indxdbs2 indxdbs2 I
b3_rk2 indxdbs2 indxdbs2 I
b3_rk3 indxdbs1 indxdbs1 I
b3_rk5 indxdbs1 indxdbs1 I
b3_rk9 indxdbs1 indxdbs1 I
INFO - b3_subheader: Columns Indexes Privileges References Status cOnstraints triGgers Table Fragments Exit
Display fragment strategy for a table.
----------------------- ip_arch03@ardb --------- Press CTRL-W for Help --------
Idx/Tbl name Dbspace Partition Type Expression
200_697 datadbs1 datadbs1 I
b3_subheader_rk1 indxdbs1 indxdbs1 I
INFO - b3_line: Columns Indexes Privileges References Status cOnstraints triGgers Table Fragments Exit
Display fragment strategy for a table.
----------------------- ip_arch03@ardb --------- Press CTRL-W for Help --------
Idx/Tbl name Dbspace Partition Type Expression
201_711 datadbs2 datadbs2 I
201_841 datadbs2 datadbs2 I
INFO - b3_recap_details: Columns Indexes Privileges References Status cOnstraints triGgers Table Fragments Exit
Display fragment strategy for a table.
----------------------- ip_arch03@ardb --------- Press CTRL-W for Help --------
Idx/Tbl name Dbspace Partition Type Expression
202_753 datadbs1 datadbs1 I
202_842 datadbs1 datadbs1 I
INFO - b3_line_comment: Columns Indexes Privileges References Status cOnstraints triGgers Table Fragments Exit
Display fragment strategy for a table.
----------------------- ip_arch03@ardb --------- Press CTRL-W for Help --------
Idx/Tbl name Dbspace Partition Type Expression
153_424 datadbs2 datadbs2 I
153_837 datadbs2 datadbs2 I
INFO - b3_line_iid: Columns Indexes Privileges References Status cOnstraints triGgers Table Fragments Exit
Display fragment strategy for a table.
----------------------- ip_arch03@ardb --------- Press CTRL-W for Help --------
Idx/Tbl name Dbspace Partition Type Expression
118_113 datadbs2 datadbs2 I
$ dbschema -d ip_systest -ss ip_systest.sql
The dbschema -ss option generates server-specific information. In all Informix® database servers except SE, the -ss option always generates the
lock mode, extent sizes, and the dbspace name if the dbspace name is different from the database dbspace. In addition, if tables are fragmented,
the -ss option displays information about the fragmentation strategy.
NIM
Setup NIM Environment
Install the NIM master filesets on the NIM server:
Install and Update from ALL Available Software
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
* Installation Target master
* LPP_SOURCE lpp_souceAll
* Software to Install [bos.adt, bos.nim > +
How NIM Works
The NIM server provides the OS and software needed by clients over the network. The sample
steps below use the NIM server to install an OS on a standalone client; the same approach
applies when using the NIM server to install software on a client, or to back up client
data to the NIM server with mksysb/savevg.
Define NIM Resource
The NIM server holds the NIM database. The NIM database is stored in the AIX Object Data
Management (ODM) repository on the NIM master and is divided into four classes: machines,
networks, resources, and groups:
Define the lpp_source resource. Copy the AIX DVD/image to a NIM server filesystem:
1. Copy software packaged as a CD image (ISO format) to the NIM resource server
directory /export/lpp_source and define it as an lpp_source:
#loopmount -i AIX6.1BaseTL05SP06_DVD1.iso -o "-V cdrfs -o ro" -m /mnt
#cd /mnt
#find . -print | cpio -pdl /export/lpp_source/lpp_sourceAll
#inutoc .
#gencopy -d /recyclebox/aix6.1-tl6-sp7 -U all
2. Copy from CD to NIM Resource Server
1. Place CD into the CD-ROM drive.
2. Enter # smit bffcreate
3. You can create your own bff installation package using mkinstallp to make
lpp_source
Use mkinstallp to create a bff package installable via "smitty installp"
root@ifx01:/cgi #cat cgi.template
Package Name: CGIMIGRATION
Package VRMF: 1.0.0.0
Update: N
Fileset
Fileset Name: CGIMIGRATION.rte
Fileset VRMF: 1.0.0.0
Fileset Description: CGIMIGRATION
Bosboot required: N
License agreement acceptance required: N
Include license files in this package: N
Requisites:
ROOT Part: Y
ROOTFiles
/bin
/bin/cgi.backup
/bin/cgi.comment
/bin/cgi.crfs
/bin/cgi.db2backup
/bin/cgi.delete
/bin/cgi.ftp
/bin/cgi.idsbackup
/bin/cgi.idsrestore
/bin/cgi.mkchunk
EOROOTFiles
EOFileset
root@ifx01:/cgi #mkinstallp -T /cgi/cgi.template -d /cgi
root@ifx01:/cgi/tmp #restore -qTvf CGIMIGRATION.1.0.0.0.bff
Copy this newly created "CGIMIGRATION.1.0.0.0.bff" to /export/lpp_source/cgitools
Create the table of contents for the directory:
#cd /export/lpp_source/cgitools
#inutoc .
Create a new lpp_source named "cgimigrationtools"
Then, you can install "cgimigrationtools" on any machine with NIM
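Template files like cgi.template can also be generated from a file list rather than typed by hand. A minimal sketch, emitting only the core fields (the real template shown above carries more, such as Fileset Description and Bosboot required; all names here are placeholders):

```shell
#!/bin/sh
# Emit a minimal mkinstallp template for a ROOT-part fileset.
# Only the core fields are generated; add the optional ones
# (Description, Bosboot required, ...) as your package needs them.
gen_template() {
    pkg=$1; vrmf=$2; shift 2
    printf 'Package Name: %s\n' "$pkg"
    printf 'Package VRMF: %s\n' "$vrmf"
    printf 'Update: N\n'
    printf 'Fileset\n'
    printf '  Fileset Name: %s.rte\n' "$pkg"
    printf '  Fileset VRMF: %s\n' "$vrmf"
    printf '  ROOT Part: Y\n'
    printf '  ROOTFiles\n'
    for f in "$@"; do
        printf '    %s\n' "$f"
    done
    printf '  EOROOTFiles\n'
    printf 'EOFileset\n'
}

gen_template CGIMIGRATION 1.0.0.0 /bin/cgi.backup /bin/cgi.ftp
# Then:  gen_template ... > /tmp/cgi.template && mkinstallp -T /tmp/cgi.template -d /cgi
```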
#smitty nim
Manage Resources
Move cursor to desired item and press Enter.
List All Network Install Resources
Define a Resource
Change/Show Characteristics of a Resource
Show the Contents of a Resource
Remove a Resource
Perform Operations on Resources
Verify Resources
Define a Resource
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Resource Name [lpp_souceAll]
* Resource Type lpp_source
* Server of Resource [master] +
* Location of Resource [/export/lpp_source/lp> /
NFS Client Security Method [] +
NFS Version Access [] +
Architecture of Resource [] +
Source of Install Images [] +/
Names of Option Packages []
Show Progress [yes] +
Comments []
Command: OK stdout: yes stderr: no
Before command completion, additional instructions may appear below.
Preparing to copy install images (this will take several minutes)...
Now checking for missing install images...
All required install images have been found. This lpp_source is now ready.
Create a SPOT resource from an lpp_source or mksysb by installing the needed software into
the SPOT for NIM operations.
Define a Resource
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
* Resource Name [spotAll]
* Resource Type spot
* Server of Resource [master] +
Source of Install Images [lpp_souceAll] +
* Location of Resource [/export/spot/spotAll] /
NFS Client Security Method [] +
NFS Version Access [] +
Expand file systems if space needed? yes +
Comments []
installp Flags
PREVIEW only? (install operation will NOT occur) no +
COMMIT software updates? no +
[MORE...4]
Then, you can install/migrate/backup/restore any machine managed by the NIM server
Backup Client Mksysb to NIM Server by creating mksysb from client
Manage Resources
Move cursor to desired item and press Enter.
List All Network Install Resources
Define a Resource
Change/Show Characteristics of a Resource
Show the Contents of a Resource
Remove a Resource
Perform Operations on Resources
Verify Resources
Manage Resources
Mo+--------------------------------------------------------------------------+
| Resource Type |
| |
| Move cursor to desired item and press Enter. Use arrow keys to scroll. |
| |
| [MORE...8] |
| lpp_source = source device for optional product images |
| installp_bundle = an installp bundle file |
| fix_bundle = fix (keyword) input file for the cust or fix_query o |
| bosinst_data = config file used during base system installation |
| image_data = config file used during base system installation |
| vg_data = config file used during volume group restoration |
| mksysb = a mksysb image |
| script = an executable file which is executed on a client |
| resolv_conf = configuration file for name-server information |
| savevg = a savevg image |
| [MORE...10] |
| |
| F1=Help F2=Refresh F3=Cancel |
| Esc+8=Image Esc+0=Exit Enter=Do |
F1| /=Find n=Find Next |
Es+--------------------------------------------------------------------------+
Define a Resource
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
* Resource Name [mksysb_lms]
* Resource Type mksysb
* Server of Resource [master] +
* Location of Resource [/export/mksysb/mksysb> /
NFS Client Security Method [] +
NFS Version Access [] +
Comments []
Source for Replication [] +
-OR-
System Backup Image Creation Options:
CREATE system backup image? yes +
NIM CLIENT to backup [admsrv2] +
[MORE...14]
When installing an OS from mksysb using NIM, the best practice is to un-define the disks
from the SAN and run cfgmgr to configure the SAN disks after the OS installation finishes.
Also check the following OS-level directories/files; bos_inst resources defined for a
machine should appear there:
1. /etc/bootp.conf
2. /tftpboot
Manage NIM resource
Network Installation Manager
Move cursor to desired item and press Enter.
Configure the NIM Environment
Perform NIM Software Installation and Maintenance Tasks
Perform NIM Administration Tasks
Create IPL ROM Emulation Media
NIM POWER5 Tools
Thin Server Maintenance
Perform NIM Administration Tasks
Move cursor to desired item and press Enter.
Manage Networks
Manage Machines
Manage Control Objects
Manage Resources
Manage Groups
Backup/Restore the NIM Database
Configure NIM Environment Options
Rebuild the niminfo File on the Master
Change the Master's Primary Interface
Manage Alternate Master Environment
Unconfigure NIM
Manage Network Install Resource Allocation
Move cursor to desired item and press Enter.
List Allocated Network Install Resources
Allocate Network Install Resources
Deallocate Network Install Resources
+--------------------------------------------------------------------------+
| Target Name |
| |
| Move cursor to desired item and press Enter. |
| |
| master machines master |
| admsrv2 machines standalone |
| admsrv1 machines standalone |
| |
| F1=Help F2=Refresh F3=Cancel |
| Esc+8=Image Esc+0=Exit Enter=Do |
F1| /=Find n=Find Next |
Es+--------------------------------------------------------------------------+
Manage Network Install Resource Allocation
Mo+--------------------------------------------------------------------------+
| Available Network Install Resources |
| |
| Move cursor to desired item and press Esc+7. |
| ONE OR MORE items can be selected. |
| Press Enter AFTER making all selections. |
| |
| [MORE...39] |
| vac-aix50 installp_bundle |
| vacpp-aix50 installp_bundle |
| wsm_remote installp_bundle |
| bid_ow bosinst_data |
| hacmp_source lpp_source |
| mksysb_lms mksysb |
| > lpp_souceAll lpp_source |
| > spotAll spot |
| [BOTTOM] |
| |
| F1=Help F2=Refresh F3=Cancel |
| Esc+7=Select Esc+8=Image Esc+0=Exit |
F1| Enter=Do /=Find n=Find Next |
Es+--------------------------------------------------------------------------+
Install/Migrate OS/software
Migrate the NIM Server using the AIX 6.1 CD
Cloning rootvg with alt_disk_install
{nimmast}:/ # unmirrorvg -c1 rootvg hdisk1
{nimmast}:/ # chpv -c hdisk1
{nimmast}:/ # lspv -l hdisk1 ; migratepv hdisk1 hdisk0 (if required)
{nimmast}:/ # lspv -l hdisk1
{nimmast}:/ # reducevg rootvg hdisk1
{nimmast}:/ # lsvg -p rootvg
{nimmast}:/ # bosboot -a -d /dev/hdisk0
{nimmast}:/ # bootlist -m normal hdisk0
{nimmast}:/ # bootlist -m normal -o
hdisk0
{nimmast}:/ # alt_disk_install -B -C hdisk1
Perform a migration installation of AIX 6.1. We are now ready to execute the AIX migration. At this point in our case, we must go to the NIM master's
console (via the HMC) and prepare to migrate via CD.
Insert AIX6.1 Installation CD Volume 1 into the CD drive.
Follow the procedure in the AIX installation and migration guide to migrate via media to AIX6.1
After the migration is finished, remove the AIX 6.1 Installation Volume 1 CD from the CD-ROM drive.
Check the system configuration, for example, oslevel, disk, network, AIX error report, and so on.
Clean up old AIX 5.2 filesets. It may be necessary to remove old AIX 5.2 filesets after the migration.
Check the NIM environment. In our case, we perform some quick tests to verify the NIM environment after the migration. Using the lsnim command,
we verify that the NIM database is intact. We check the state of the master and clients. We also validate some of our NIM resources.
Check the NIM database:
{nimmast}:/ # lsnim
Check the status of the NIM master:
{nimmast}:/ # lsnim -a Cstate -a Mstate master
Check the status of a NIM client:
{nimmast}:/ # lsnim -a Cstate -a Mstate LPAR4
Validate the NIM resources:
{nimmast}:/ # nim -o check LPP_52_ML8
{nimmast}:/ # nim -o check SPOT_52_ML8
Build an AIX6.1 lpp_source and SPOT.
After the NIM server has migrated successfully, we will migrate a client system from AIX 5.3
to AIX 6.1 using nimadm.
The NIM master in this environment is running AIX 6.1 TL3 SP2. Our NIM client name is aix1 (running AIX 5.3 TL7 SP5 and migrating to AIX 6.1
TL3 SP1) and the NIM master's name is nim1.
Ensure that you read the AIX 6.1 release notes and review the documented requirements such as the amount of free disk space required.
Prior to a migration, it is always a good idea to run the pre_migration script on the system to catch any issues that may prevent the migration from
completing successfully. You can find this script on the AIX 6.1 installation media.
Run this script, review the output (in /home/pre_migration), and correct any issues that it reports before migrating.
#./pre_migration
All saved information can be found in: /home/pre_migration.090903105452
Checking size of boot logical volume (hd5).
Your rootvg has mirrored logical volumes (copies greater than 1)
Recommendation: Break existing mirrors before migrating.
Listing software that will be removed from the system.
Listing configuration files that will not be merged.
Listing configuration files that will be merged.
Saving configuration files that will be merged.
Running lppchk commands. This may take awhile.
Please check /home/pre_migration.090903105452/software_file_existence_check
for possible errors.
Please check /home/pre_migration.090903105452/software_checksum_verification
for possible errors.
Please check /home/pre_migration.090903105452/tcbck.output for possible errors.
All saved information can be found in: /home/pre_migration.090903105452
It is recommended that you create a bootable system backup of your system
before migrating.
I always take a copy of the /etc/sendmail.cf and /etc/motd files before an AIX migration. These files will be replaced during the migration and you
will need to edit them again and add your modifications.
Commit any applied filesets. You should also consider removing any ifixes that may hinder the migration.
If rootvg is mirrored, I break the mirror and reduce it to a single disk. This gives me a spare disk that can be used for the migration.
To allow nimadm to do its job, I must temporarily enable rshd on the client LPAR. I will disable it again after the migration.
# chsubserver -a -v shell -p tcp6 -r inetd
# refresh -s inetd
# cd /
# rm .rhosts
# vi .rhosts
+
# chmod 600 .rhosts
On the NIM master, I can now 'rsh' to the client and run a command as root.
# rsh aix1 whoami
root
At this point I'm ready to migrate. The process will take around 30-45 minutes; all the while the applications on the LPAR will continue to function as
normal.
On the NIM master, I have created a new volume group (VG) named nimadmvg. This VG has enough capacity to cater for a full copy of the NIM
clients root volume group (rootvg). This VG will be empty until the migration is started.
Likewise, on the NIM client, I have a spare disk which has enough capacity for a full copy of its rootvg.
On the master (nim1):
# lsvg -l nimadmvg
nimadmvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
On the client (aix1):
# lspv
hdisk0 0000273ac30fdcfc rootvg active
hdisk1 000273ac30fdd6e None
The bos.alt_disk_install.rte fileset is installed on the NIM master:
# lslpp -l bos.alt_disk_install.rte
Fileset Level State Description
----------------------------------------------------------------------------
Path: /usr/lib/objrepos
bos.alt_disk_install.rte 6.1.3.1 APPLIED Alternate Disk Installation
Runtime
And it is also installed in the AIX 6.1 TL3 SP1 SPOT:
# nim -o showres 'spotaix61031' | grep bos.alt_disk_install.rte
bos.alt_disk_install.rte 6.1.3.1 C F Alternate Disk Installation
The nimadm command is executed from the NIM master.
# nimadm -j nimadmvg -c aix1 -s spotaix61031 -l lppsourceaix61031 -d hdisk1 -Y
Where:
-j specifies the VG on the master which will be used for the migration
-c is the client name
-s is the SPOT name
-l is the lpp_source name
-d is the hdisk name for the alternate root volume group (altinst_rootvg)
-Y agrees to the software license agreements for software that will be installed during the migration.
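Those flags can be captured in a small wrapper that composes the command and echoes it for review before anything runs; all values below are the example names from this section:

```shell
#!/bin/sh
# Build the nimadm command from named variables and print it for
# review; run it only after checking the composed line.
CLIENT=aix1
SPOT=spotaix61031
LPPSOURCE=lppsourceaix61031
CACHE_VG=nimadmvg
TARGET_DISK=hdisk1

CMD="nimadm -j $CACHE_VG -c $CLIENT -s $SPOT -l $LPPSOURCE -d $TARGET_DISK -Y"
echo "$CMD"
# After review:  eval "$CMD"
```

Keeping the parameters in variables makes it harder to point -d at the wrong disk when migrating several clients in a row.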
Now I can sit back and watch the migration take place. All migration activity is logged on the NIM master in the /var/adm/ras/alt_mig directory. For
this migration, the log file name is aix1_alt_mig.log. Here's a sample of some of the output you can expect to see for each phase:
MASTER DATE: Mon Nov 9 14:29:09 EETDT 2009
CLIENT DATE: Mon Nov 9 14:29:09 EETDT 2009
NIMADM PARAMETERS: -j nimadmvg -c aix1 -s spotaix61031 -l lppsourceaix61031 -d hdisk1 -Y
Starting Alternate Disk Migration.
+----------------------------------------------------------------------+
Executing nimadm phase 1.
+----------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 1.
Client alt_disk_install command: alt_disk_copy -j -i /ALT_MIG_IMD -M 6.1 -P1 -d "hdisk1"
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5.
Creating logical volume alt_hd6.
Creating logical volume alt_hd8.
Creating logical volume alt_hd4.
Creating logical volume alt_hd2.
Creating logical volume alt_hd9var.
Creating logical volume alt_hd3.
Creating logical volume alt_hd1.
Creating logical volume alt_hd10opt.
Creating logical volume alt_hd7.
Creating logical volume alt_local_lv.
Creating logical volume alt_varloglv.
Creating logical volume alt_nmonlv.
Creating logical volume alt_chksyslv.
Creating logical volume alt_hd71.
Creating logical volume alt_auditlv.
Creating logical volume alt_nsrlv.
Creating logical volume alt_hd11admin.
Creating /alt_inst/ file system.
Creating /alt_inst/admin file system.
Creating /alt_inst/home file system.
Creating /alt_inst/home/nmon file system.
Creating /alt_inst/nsr file system.
Creating /alt_inst/opt file system.
Creating /alt_inst/tmp file system.
Creating /alt_inst/usr file system.
Creating /alt_inst/usr/local file system.
Creating /alt_inst/usr/local/chksys file system.
Creating /alt_inst/var file system.
Creating /alt_inst/var/log file system.
Creating /alt_inst/var/log/audit file system.
Generating a list of files
for backup and restore into the alternate file system...
Phase 1 complete.
+----------------------------------------------------------------------+
Executing nimadm phase 2.
+----------------------------------------------------------------------+
Creating nimadm cache file systems on volume group nimadmvg.
Checking for initial required migration space.
Creating cache file system /aix1_alt/alt_inst
Creating cache file system /aix1_alt/alt_inst/admin
Creating cache file system /aix1_alt/alt_inst/home
Creating cache file system /aix1_alt/alt_inst/home/nmon
Creating cache file system /aix1_alt/alt_inst/nsr
Creating cache file system /aix1_alt/alt_inst/opt
Creating cache file system /aix1_alt/alt_inst/tmp
Creating cache file system /aix1_alt/alt_inst/usr
Creating cache file system /aix1_alt/alt_inst/usr/local
Creating cache file system /aix1_alt/alt_inst/usr/local/chksys
Creating cache file system /aix1_alt/alt_inst/var
Creating cache file system /aix1_alt/alt_inst/var/log
Creating cache file system /aix1_alt/alt_inst/var/log/audit
+----------------------------------------------------------------------+
Executing nimadm phase 3.
+----------------------------------------------------------------------+
Syncing client data to cache ...
+----------------------------------------------------------------------+
Executing nimadm phase 4.
+----------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.
+----------------------------------------------------------------------+
Executing nimadm phase 5.
+----------------------------------------------------------------------+
Saving system configuration files.
Checking for initial required migration space.
Setting up for base operating system restore.
/aix1_alt/alt_inst
Restoring base operating system.
Merging system configuration files.
Running migration merge method: ODM_merge Config_Rules.
Running migration merge method: ODM_merge SRCextmeth.
Running migration merge method: ODM_merge SRCsubsys.
Running migration merge method: ODM_merge SWservAt.
Running migration merge method: ODM_merge pse.conf.
Running migration merge method: ODM_merge vfs.
Running migration merge method: ODM_merge xtiso.conf.
Running migration merge method: ODM_merge PdAtXtd.
Running migration merge method: ODM_merge PdDv.
Running migration merge method: convert_errnotify.
Running migration merge method: passwd_mig.
Running migration merge method: login_mig.
Running migration merge method: user_mrg.
Running migration merge method: secur_mig.
Running migration merge method: RoleMerge.
Running migration merge method: methods_mig.
Running migration merge method: mkusr_mig.
Running migration merge method: group_mig.
Running migration merge method: ldapcfg_mig.
Running migration merge method: ldapmap_mig.
Running migration merge method: convert_errlog.
Running migration merge method: ODM_merge GAI.
Running migration merge method: ODM_merge PdAt.
Running migration merge method: merge_smit_db.
Running migration merge method: ODM_merge fix.
Running migration merge method: merge_swvpds.
Running migration merge method: SysckMerge.
+----------------------------------------------------------------------+
Executing nimadm phase 6.
+----------------------------------------------------------------------+
Installing and migrating software.
Updating install utilities.
+----------------------------------------------------------------------+
Pre-installation Verification...
+----------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...
…output truncated….
install_all_updates: Generating list of updatable rpm packages.
install_all_updates: No updatable rpm packages found.
install_all_updates: Checking for recommended maintenance level 6100-03.
install_all_updates: Executing /usr/bin/oslevel -rf, Result = 6100-03
install_all_updates: Verification completed.
install_all_updates: Log file is /var/adm/ras/install_all_updates.log
install_all_updates: Result = SUCCESS
Restoring device ODM database.
+----------------------------------------------------------------------+
Executing nimadm phase 7.
+----------------------------------------------------------------------+
nimadm: There is no user customization script specified for this phase.
+----------------------------------------------------------------------+
Executing nimadm phase 8.
+----------------------------------------------------------------------+
Creating client boot image.
bosboot: Boot image is 40952 512 byte blocks.
Writing boot image to client's alternate boot disk hdisk1.
+----------------------------------------------------------------------+
Executing nimadm phase 9.
+----------------------------------------------------------------------+
Adjusting client file system sizes ...
Adjusting size for /
Adjusting size for /admin
Adjusting size for /home
Adjusting size for /home/nmon
Adjusting size for /nsr
Adjusting size for /opt
Adjusting size for /tmp
Adjusting size for /usr
Adjusting size for /usr/local
Adjusting size for /usr/local/chksys
Adjusting size for /var
Adjusting size for /var/log
Adjusting size for /var/log/audit
Syncing cache data to client ...
+----------------------------------------------------------------------+
Executing nimadm phase 10.
+----------------------------------------------------------------------+
Unmounting client mounts on the NIM master.
forced unmount of /aix1_alt/alt_inst/var/log/audit
forced unmount of /aix1_alt/alt_inst/var/log
forced unmount of /aix1_alt/alt_inst/var
forced unmount of /aix1_alt/alt_inst/usr/local/chksys
forced unmount of /aix1_alt/alt_inst/usr/local
forced unmount of /aix1_alt/alt_inst/usr
forced unmount of /aix1_alt/alt_inst/tmp
forced unmount of /aix1_alt/alt_inst/opt
forced unmount of /aix1_alt/alt_inst/nsr
forced unmount of /aix1_alt/alt_inst/home/nmon
forced unmount of /aix1_alt/alt_inst/home
forced unmount of /aix1_alt/alt_inst/admin
forced unmount of /aix1_alt/alt_inst
Removing nimadm cache file systems.
Removing cache file system /aix1_alt/alt_inst
Removing cache file system /aix1_alt/alt_inst/admin
Removing cache file system /aix1_alt/alt_inst/home
Removing cache file system /aix1_alt/alt_inst/home/nmon
Removing cache file system /aix1_alt/alt_inst/nsr
Removing cache file system /aix1_alt/alt_inst/opt
Removing cache file system /aix1_alt/alt_inst/tmp
Removing cache file system /aix1_alt/alt_inst/usr
Removing cache file system /aix1_alt/alt_inst/usr/local
Removing cache file system /aix1_alt/alt_inst/usr/local/chksys
Removing cache file system /aix1_alt/alt_inst/var
Removing cache file system /aix1_alt/alt_inst/var/log
Removing cache file system /aix1_alt/alt_inst/var/log/audit
+----------------------------------------------------------------------+
Executing nimadm phase 11.
+----------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 3.
Client alt_disk_install command: alt_disk_copy -j -i /ALT_MIG_IMD -M 6.1 -P3 -d "hdisk1"
## Phase 3 ###################
Verifying altinst_rootvg...
Modifying ODM on cloned disk.
forced unmount of /alt_inst/var/log/audit
forced unmount of /alt_inst/var/log
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr/local/chksys
forced unmount of /alt_inst/usr/local
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/nsr
forced unmount of /alt_inst/home/nmon
forced unmount of /alt_inst/home
forced unmount of /alt_inst/admin
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.
Fixing LV control blocks...
Fixing file system superblocks...
Bootlist is set to the boot disk: hdisk1 blv=hd5
+----------------------------------------------------------------------+
Executing nimadm phase 12.
+----------------------------------------------------------------------+
Cleaning up alt_disk_migration on the NIM master.
Cleaning up alt_disk_migration on client aix1.
After the migration is complete, I confirm that the bootlist is set to the altinst_rootvg disk.
# lspv | grep rootvg
hdisk0 0000273ac30fdcfc rootvg active
hdisk1 000273ac30fdd6e altinst_rootvg active
# bootlist -m normal -o
hdisk1 blv=hd5
At an agreed time, I reboot the LPAR and confirm that the system is now running AIX 6.1.
# shutdown -Fr
; system reboots here…
# oslevel -s
6100-03-01-0921
# instfix -i | grep AIX
All filesets for 6.1.0.0_AIX_ML were found.
All filesets for 6100-00_AIX_ML were found.
All filesets for 6100-01_AIX_ML were found.
All filesets for 6100-02_AIX_ML were found.
All filesets for 6100-03_AIX_ML were found.
At this point, I would perform some general AIX system health checks to ensure that the system is configured and running as I'd expect. There is
also a post_migration script that you can run to verify the migration. You can find this script in /usr/lpp/bos after the migration.
You may want to consider upgrading other software, such as openssl, openssh, and lsof, at this stage.
The rsh daemon can now be disabled after the migration.
# chsubserver -d -v shell -p tcp6 -r inetd
# refresh -s inetd
# cd /
# rm .rhosts
# ln -s /dev/null .rhosts
With the migration finished, the applications are started and the application support team verify that everything is functioning as expected. I also
take a mksysb and document the system configuration after the migration.
Once we are all satisfied that the migration has completed successfully, we then return rootvg to a mirrored disk configuration.
# lspv | grep old_rootvg
hdisk0 000071da26fe3bd0 old_rootvg
# alt_rootvg_op -X old_rootvg
# extendvg -f rootvg hdisk0
# mirrorvg rootvg hdisk0
# bosboot -a -d /dev/hdisk0
# bosboot -a -d /dev/hdisk1
# bootlist -m normal hdisk0 hdisk1
# bootlist -m normal -o
hdisk0 blv=hd5
hdisk1 blv=hd5
If there was an issue with the migration, I could easily back out to the previous release of AIX. Instead of re-mirroring rootvg (above), we would
change the boot list to point at the previous rootvg disk (old_rootvg) and reboot the LPAR.
# lspv | grep old_rootvg
hdisk0 000071da26fe3bd0 old_rootvg
# bootlist -m normal hdisk0
# bootlist -m normal -o
hdisk0 blv=hd5
# shutdown -Fr
This is much simpler and faster than restoring a mksysb image (via NIM, tape, or DVD), as you would with a conventional migration method.
Install/update Software from NIM Server
Install and Update Software
Move cursor to desired item and press Enter.
Install the Base Operating System on Standalone Clients
Install Software
Update Installed Software to Latest Level (Update All)
Install Software Bundle
Update Software by Fix (APAR)
Install and Update from ALL Available Software
Install Linux on a Standalone Client or Machine Group
Perform a Network Install
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
Target Name admsrv2
Source for BOS Runtime Files rte +
installp Flags [-agX]
Fileset Names []
Remain NIM client after install? yes +
Initiate Boot Operation on Client? yes +
Set Boot List if Boot not Initiated on Client? no +
Force Unattended Installation Enablement? no +
ACCEPT new license agreements? [yes] +
Manage Network Install Resource Allocation
Mo+--------------------------------------------------------------------------+
| Available Network Install Resources |
| |
| Move cursor to desired item and press Esc+7. |
| ONE OR MORE items can be selected. |
| Press Enter AFTER making all selections. |
| |
| [MORE...38] |
| openssh_server installp_bundle |
| vac-aix50 installp_bundle |
| vacpp-aix50 installp_bundle |
| wsm_remote installp_bundle |
| bid_ow bosinst_data |
| > hacmp_source lpp_source |
| > lpp_souceAll lpp_source |
| spotAll spot |
| [BOTTOM] |
| |
| F1=Help F2=Refresh F3=Cancel |
| Esc+7=Select Esc+8=Image Esc+0=Exit |
F1| Enter=Do /=Find n=Find Next |
Es+--------------------------------------------------------------------------+
COMMAND STATUS
Command: failed stdout: yes stderr: no
Before command completion, additional instructions may appear below.
[MORE...57]
of the selected filesets listed above. They are not currently installed
and could not be found on the installation media.
bos.adt.syscalls 5.3.7.0 # Base Level Fileset
bos.data 5.1.0.0 # Base Level Fileset
bos.data 5.3.0.0 # Base Level Fileset
bos.net.nfs.server 5.3.7.0 # Base Level Fileset
rsct.basic.rte 2.5.5.0 # Base Level Fileset
GROUP REQUISITES: The dependencies of one or more of the selected filesets
listed above are defined by a group requisite. A group requisite must pass
a specified number of requisite tests. The following describe group
[MORE...266]
Verify an Optional Program Product
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
* Installation Target admsrv2
* LPP_SOURCE lpp_souceAll
* Software to Install [bos.adt > +
Customization SCRIPT to run after installation [] +
(not applicable to SPOTs)
Force yes +
installp Flags
PREVIEW only? [no] +
COMMIT software updates? [yes] +
SAVE replaced files? [no] +
[MORE...18]
Verify an Optional Program Product
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
* Installation Target admsrv2
* LPP_SOURCE hacmp_source
* Software to Install [cluster.adt.es > +
Customization SCRIPT to run after installation [] +
(not applicable to SPOTs)
Force yes +
installp Flags
PREVIEW only? [no] +
COMMIT software updates? [yes] +
SAVE replaced files? [no] +
[MORE...18]
COMMAND STATUS
Command: failed stdout: yes stderr: no
Before command completion, additional instructions may appear below.
[MORE...69]
requisite failures for filesets that you selected. (See the "Requisite
Failure Key" below for details of group member failures.)
At least 1 of the following:
| At least 2 of the following:
| | * rsct.compat.clients.hacmp 2.5.4.0
| | * rsct.compat.basic.hacmp 2.5.4.0
| At least 2 of the following:
| | ~ rsct.compat.basic.hacmp 2.4.12.0
| | * bos.rte v=5, r<4
What you can do on HMC with Web-based System Manager
What you can do when the Server is in “Operating” status
TIPS: What you can do when the Server is powered off
TIPS: Server Properties in Operating status:
(If the Server is powered off, the “Processor/IO/Memory” tabs are not shown, as follows:)
Partition Properties:
Hardware management using the HMC at the firmware level, which cannot be done at the AIX level
What you can do now:
For Example, Launch ASM Menu
User ID: admin; Passwd: admin
As you can see, many operations are available here, including resetting the system to the factory
configuration, deconfiguring CPU/memory, and viewing or clearing system event/error logs. These
operations are not possible in any other environment; they are controlled entirely by the Service
Processor(s) and the firmware on the system board, and they work even when the system is powered off:
System Plans Management
View a system plan (user ID: hscroot; Passwd: abc5678)
HMC User Management
HMC Code Management
Useful Documents here
What is vi?
The default editor that comes with the UNIX operating system is called vi (visual editor).
[Alternate editors for UNIX environments include pico and emacs, a product of GNU.]
The UNIX vi editor is a full screen editor and has two modes of operation:
1. Command mode commands which cause action to be taken on the file, and
2. Insert mode in which entered text is inserted into the file.
In the command mode, every character typed is a command that does something to the text
file being edited; a character typed in the command mode may even cause the vi editor to
enter the insert mode. In the insert mode, every character typed is added to the text in the file;
pressing the <Esc> (Escape) key turns off the Insert mode.
While there are a number of vi commands, just a handful of these is usually sufficient for
beginning vi users. To assist such users, this section contains a sampling of basic vi
commands. The most basic and useful commands are marked with an asterisk (* or star) in
the tables below. With practice, these commands should become automatic.
NOTE: Both UNIX and vi are case-sensitive. Be sure not to use a capital letter in place of a
lowercase letter; the results will not be what you expect.
To Get Into and Out Of vi
To Start vi
To use vi on a file, type in vi filename. If the file named filename exists, then the first
page (or screen) of the file will be displayed; if the file does not exist, then an empty file and
screen are created into which you may enter text.
*
vi filename
edit filename starting at line 1
vi -r filename
recover filename that was being edited when system crashed
To Exit vi
Usually the new or modified file is saved when you leave vi. However, it is also possible to
quit vi without saving the file.
Note: The cursor moves to bottom of screen whenever a colon (:) is typed. This type of
command is completed by hitting the <Return> (or <Enter>) key.
*
:x<Return>
quit vi, writing out modified file to file named in original invocation
:wq<Return>
quit vi, writing out modified file to file named in original invocation
:q<Return>
quit (or exit) vi
*
:q!<Return>
quit vi even though latest changes have not been saved for this vi call
Moving the Cursor
Unlike many of the PC and Macintosh editors, the mouse does not move the cursor within
the vi editor screen (or window). You must use the key commands listed below. On some
UNIX platforms, the arrow keys may be used as well; however, since vi was designed with
the Qwerty keyboard (containing no arrow keys) in mind, the arrow keys sometimes produce
strange effects in vi and should be avoided.
If you go back and forth between a PC environment and a UNIX environment, you may find
that this dissimilarity in methods for cursor movement is the most frustrating difference
between the two.
In the table below, the symbol ^ before a letter means that the <Ctrl> key should be held
down while the letter key is pressed.
:set nu
display line number before each line
*
j or <Return>
[or down-arrow]
move cursor down one line
*
k [or up-arrow]
move cursor up one line
*
h or <Backspace>
[or left-arrow]
move cursor left one character
*
l or <Space>
[or right-arrow]
move cursor right one character
*
0 (zero)
move cursor to start of current line (the one with the cursor)
*
$
move cursor to end of current line
w
move cursor to beginning of next word
b
move cursor back to beginning of preceding word
:0<Return> or 1G
move cursor to first line in file
:n<Return> or nG
move cursor to line n
:$<Return> or G
move cursor to last line in file
Screen Manipulation
The following commands allow the vi editor screen (or window) to move up or down several
lines and to be refreshed.
^f
move forward one screen
^b
move backward one screen
^d
move down (forward) one half screen
^u
move up (back) one half screen
^l
redraws the screen
^r
redraws the screen, removing deleted lines
Adding, Changing, and Deleting Text
Unlike PC editors, you cannot replace or delete text by highlighting it with the mouse. Instead
use the commands in the following tables.
Perhaps the most important command is the one that allows you to back up and undo your last
action. Unfortunately, this command acts like a toggle, undoing and redoing your most recent
action. You cannot go back more than one step.
*
u
UNDO WHATEVER YOU JUST DID; a simple toggle
The main purpose of an editor is to create, add, or modify text for a file.
Inserting or Adding Text
The following commands allow you to insert and add text. Each of these commands puts the
vi editor into insert mode; thus, the <Esc> key must be pressed to terminate the entry of text
and to put the vi editor back into command mode.
*
i
insert text before cursor, until <Esc> hit
I
insert text at beginning of current line, until <Esc> hit
*
a
append text after cursor, until <Esc> hit
A
append text to end of current line, until <Esc> hit
*
o
open and put text in a new line below current line, until <Esc> hit
*
O
open and put text in a new line above current line, until <Esc> hit
Changing Text
The following commands allow you to modify text.
*
r
replace single character under cursor (no <Esc> needed)
R
replace characters, starting with current cursor position, until <Esc> hit
cw
change the current word with new text,
starting with the character under cursor, until <Esc> hit
cNw
change N words beginning with character under cursor, until <Esc> hit;
e.g., c5w changes 5 words
C
change (replace) the characters in the current line, until <Esc> hit
cc
change (replace) the entire current line, stopping when <Esc> is hit
Ncc or cNc
change (replace) the next N lines, starting with the current line,
stopping when <Esc> is hit
Deleting Text
The following commands allow you to delete text.
*
x
delete single character under cursor
Nx
delete N characters, starting with character under cursor
dw
delete the single word beginning with character under cursor
dNw
delete N words beginning with character under cursor;
e.g., d5w deletes 5 words
D
delete the remainder of the line, starting with current cursor position
*
dd
delete entire current line
Ndd or dNd
delete N lines, beginning with the current line;
e.g., 5dd deletes 5 lines
Cutting and Pasting Text
The following commands allow you to copy and paste text.
yy
copy (yank, cut) the current line into the buffer
Nyy or yNy
copy (yank, cut) the next N lines, including the current line, into the buffer
p
put (paste) the line(s) in the buffer into the text after the current line
Other Commands
Searching Text
A common occurrence in text editing is to replace one word or phrase with another. To locate
instances of particular sets of characters (or strings), use the following commands.
/string
search forward for occurrence of string in text
?string
search backward for occurrence of string in text
n
move to next occurrence of search string
N
move to next occurrence of search string in opposite direction
Determining Line Numbers
Being able to determine the line number of the current line or the total number of lines in the
file being edited is sometimes useful.
:.=
returns line number of current line at bottom of screen
:=
returns the total number of lines at bottom of screen
^g
provides the current line number, along with the total number of lines,
in the file at the bottom of the screen
Saving and Reading Files
These commands permit you to input and output files other than the named file with which you are
currently working.
:r filename<Return>
read file named filename and insert after current line
(the line with cursor)
:w<Return>
write current contents to file named in original vi call
:w newfile<Return>
write current contents to a new file named newfile
:12,35w smallfile<Return>
write the contents of the lines numbered 12 through 35 to a new file
named smallfile
:w! prevfile<Return>
write current contents over a pre-existing file named prevfile
Chunk off-line due to a full file system (not enough disk space allocated to chunk files)
However, oncheck -pr shows those chunks as online; this means that at the Reserved Pages level the
chunks are not yet marked offline, although they are marked offline in memory.
If the instance is in this state, the first option is to bounce the instance (onmode -kuy) and start it
up again to check whether Fast Recovery completes successfully.
If 'onmode -kuy' does not shut down the instance (this is the expected behavior, because it cannot
write a checkpoint), then we need to run 'onstat -g glo' and kill the first CPU VP (kill -9 on the
process ID of the first VP in the instance), and then use ipcs and ipcrm to find and remove the
shared memory segments and semaphores if Informix did not shut down cleanly.
If the instance cannot complete fast recovery after the restart, we have to analyze the situation; a
possible solution is to truncate fast recovery, but you must sign the authorization to truncate Fast
Recovery before we can proceed.
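The recovery sequence described above can be sketched as the ordered list below. This is a hedged outline, not a runnable procedure: the pid, shmid, and semid placeholders must come from 'onstat -g glo' and ipcs on the affected instance, and nothing destructive is executed here.

```shell
# Print the ordered steps of the forced-restart procedure described above.
# Placeholders in <> are site-specific; this function only prints the
# sequence so it can be reviewed before anything destructive is run.
recovery_steps() {
    printf '%s\n' \
        'onmode -kuy                      # attempt a normal shutdown' \
        'onstat -g glo                    # list the VPs if shutdown hangs' \
        'kill -9 <pid-of-first-cpu-vp>    # kill the first CPU VP' \
        'ipcs                             # locate leftover IPC resources' \
        'ipcrm -m <shmid> -s <semid>      # remove shared memory/semaphores' \
        'oninit                           # restart and watch fast recovery'
}
recovery_steps
```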
Exploring the Sysmaster Database
by Lester Knutsen
lester@advancedatatools.com
When you list all the databases on your INFORMIX server, you will see one called "sysmaster". This
is a special database and is one of the new features that first appeared in INFORMIX-OnLine DSA
6.x and 7.x. This is a database that contains tables that can be used for monitoring your system. These
are referred to as the System Monitoring Interface (SMI) tables. In this chapter we will explore some
of the tables and views that are in this database.
The sysmaster database is described as a pseudo database. That means most of its tables are not
normal tables on disk, but pointers to shared memory structures in the OnLine engine. The sysmaster
database contains over 120 tables. Only 18 of these tables are documented in the INFORMIX-
OnLine Dynamic Server Administrator's Guide, Volume 2, Chapter 38. The rest are undocumented
and described by Informix as for internal use. The examples and references in this article are based
on OnLine 7.23. I have also tested some of the examples with versions 7.10, 7.12, and 7.22. There are
some minor changes between versions in the undocumented features and structures of these tables.
A warning: Some of the features discussed in this article are based on undocumented SMI tables and
may change or not work in future versions of INFORMIX OnLine DSA.
This article will focus on users, server configuration, dbspaces, chunks, tables, and monitoring IO
using the sysmaster database. We will present how to create scripts to monitor the following:
List who is using each database.
Display information about your server configuration.
Display how much free space is available in each dbspace in a format like the Unix df
command.
List the status and characteristics of each chunk device.
Display blocks of free space within a chunk. This allows you to plan where to put large tables
without fragmenting them.
Display IO statistics by chunk devices.
Display IO usage of chunk devices as a percent of the total IO, and show which chunks are
getting used the most.
Display tables and the number of extents, and number of pages used.
Present a layout of dbspace, databases, tables, and extents similar to the command "tbcheck -pe".
Show table usage statistics sorted by which tables have the most reads, writes, or locks.
Show statistics of users sessions.
Show locks and users who are waiting on locks.
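As a first taste of these scripts, the dbspace free-space report can be sketched using the same dbaccess here-document style the article uses for dbwho. This is a sketch under the assumption that the documented sysdbspaces and syschunks SMI tables are joined on dbsnum; sizes are reported in pages.

```shell
# Sketch: free space per dbspace from the documented SMI tables.
# The query is kept in a variable so it can still be printed on hosts
# where no Informix instance (and hence no dbaccess) is available.
QUERY='select d.name dbspace,
       sum(c.chksize) pages_total,
       sum(c.nfree)   pages_free
from   sysdbspaces d, syschunks c
where  d.dbsnum = c.dbsnum
group by 1
order by 1;'

if command -v dbaccess >/dev/null 2>&1; then
    dbaccess sysmaster - <<EOF
$QUERY
EOF
else
    echo "dbaccess not found; query only:"
    echo "$QUERY"
fi
```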
1. A Practical Example - Who is Using What Database
Let's begin with a very practical example of the sysmaster database's value.
My interest in this database started a couple of years ago, while consulting on a project for a
development group where I needed to know who had a database open and which workstation they
were using to connect to the database. This was a development environment and there were continual
changes to the database schemas. In order to make updates to the database schema, I would have to
get the developers to disconnect from the database. The "onstat -u" utility would tell me which users
were connected to the server, but not what database and what workstation they were using. "Onstat -g
ses" told me the user and workstation, but not the database. "Onstat -g sql" told me the session id and
database, but not the user name and workstation. After some debugging, I found all the information I
wanted in the sysmaster database. And, because it was a database, I could retrieve it with SQL
queries. The following query shows the database, who has it open, the workstation they are connected
from, and the session id.
Figure 1. Dbwho SQL script
-- dbwho.sql
select sysdatabases.name database, -- Database Name
syssessions.username, -- User Name
syssessions.hostname, -- Workstation
syslocks.owner sid -- Informix Session ID
from syslocks, sysdatabases , outer syssessions
where syslocks.tabname = "sysdatabases" -- Find locks on sysdatabases
and syslocks.rowidlk = sysdatabases.rowid -- Join rowid to database
and syslocks.owner = syssessions.sid -- Session ID to get user info
order by 1;
Every user that opens a database opens a shared lock on the row in the sysdatabases table of the
sysmaster database that points to that database. First we need to find all the locks in syslocks on the
sysdatabases table. This gives us the rowid in sysdatabases, which has the database name. Finally, we
join with the table syssessions to get the username and hostname. I put all this together in a shell
script that can be run from the unix prompt and called it dbwho. Figure 2 contains the shell script.
Figure 2. Dbwho shell script
:
###########################################################################
# Program: dbwho
# Author: Lester Knutsen
# Date: 10/28/1995
# Description: List database, user and workstation of all db users
###########################################################################
echo "Generating list of users by database ..."
dbaccess sysmaster - <<EOF
select
sysdatabases.name database,
syssessions.username,
syssessions.hostname,
syslocks.owner sid
from syslocks, sysdatabases , outer syssessions
where syslocks.rowidlk = sysdatabases.rowid
and syslocks.tabname = "sysdatabases"
and syslocks.owner = syssessions.sid
order by 1;
EOF
One of the first things you will notice is that this script is slow. This led me to start digging into what
was causing the slow performance. Running this query with set explain turned on (this shows the
query optimizer plan) shows that there is a lot of work going on behind the scenes. Syslocks is a
view, and it takes a sequential scan of six tables to produce the view. A temp table is created to hold
the results of the syslocks view, and this is then joined with the other two tables. The tables
sysdatabase and syssessions are also views. And the view syssessions uses a stored procedure, called
bitval. Figure 3 contains the output from turning set explain on. In spite of these queries sometimes
being a bit slow, these tables are a tremendous value and make it much easier to monitor your
database server.
Figure 3: Output from "set explain on" for dbwho.sql
QUERY:
------
create view "informix".syslocks
(dbsname,tabname,rowidlk,keynum,type,owner,waiter)
as select x1.dbsname ,x1.tabname ,x0.rowidr ,x0.keynum ,
x4.txt [1,4] ,x3.sid ,x5.sid
from "informix".syslcktab x0 ,
"informix".systabnames x1 ,
"informix".systxptab x2 ,
"informix".sysrstcb x3 ,
"informix".flags_text x4 ,
outer("informix".sysrstcb x5 )
where ((((((x0.partnum = x1.partnum )
AND (x0.owner = x2.address ) )
AND (x2.owner = x3.address ) )
AND (x0.wtlist = x5.address ) )
AND (x4.tabname = 'syslcktab' ) )
AND (x4.flags = x0.type ) ) ;
Estimated Cost: 713
Estimated # of Rows Returned: 51
1) informix.syslcktab: SEQUENTIAL SCAN
2) informix.flags_text: SEQUENTIAL SCAN
Filters: informix.flags_text.tabname = 'syslcktab'
DYNAMIC HASH JOIN
Dynamic Hash Filters: informix.syslcktab.type = informix.flags_text.flags
3) informix.systxptab: SEQUENTIAL SCAN
DYNAMIC HASH JOIN
Dynamic Hash Filters: informix.syslcktab.owner =
informix.systxptab.address
4) informix.systabnames: SEQUENTIAL SCAN
Filters: informix.systabnames.tabname = 'sysdatabases'
DYNAMIC HASH JOIN
Dynamic Hash Filters: informix.syslcktab.partnum
informix.systabnames.partnum
5) informix.sysrstcb: SEQUENTIAL SCAN
DYNAMIC HASH JOIN (Build Outer)
Dynamic Hash Filters: informix.systxptab.owner = informix.sysrstcb.address
6) informix.sysrstcb: SEQUENTIAL SCAN
DYNAMIC HASH JOIN
Dynamic Hash Filters: informix.syslcktab.wtlist =
informix.sysrstcb.address
QUERY:
------
select sysdatabases.name database,
syssessions.username,
syssessions.hostname,
syslocks.owner sid
from syslocks, sysdatabases, outer syssessions
where syslocks.rowidlk = sysdatabases.rowid
and syslocks.tabname = "sysdatabases"
and syslocks.owner = syssessions.sid
order by 1
Estimated Cost: 114
Estimated # of Rows Returned: 11
Temporary Files Required For: Order By
1) (Temp Table For View): SEQUENTIAL SCAN
2) informix.sysdbspartn: INDEX PATH
(1) Index Keys: ROWID
Lower Index Filter: informix.sysdbspartn.ROWID = (Temp Table For
View).rowidlk
3) informix.sysscblst: INDEX PATH
(1) Index Keys: sid (desc)
Lower Index Filter: informix.sysscblst.sid = (Temp Table For
View).owner
4) informix.sysrstcb: AUTOINDEX PATH
Filters: informix.bitval(informix.sysrstcb.flags ,'0x80000' )= 1
(1) Index Keys: scb
Lower Index Filter: informix.sysrstcb.scb = informix.sysscblst.address
2. How the Sysmaster Database is Created
The sysmaster database keeps track of information about the database server just like the system
tables keep track of information in each database. This database is automatically created when you
initialize OnLine. It includes tables for tracking two types of information: the System Monitoring
Interface (SMI) tables, and the On-Archive catalog tables. This article will focus on the SMI tables.
There is a warning in the documentation not to change any information in these tables as it may
corrupt your database server. Also there is a warning that OnLine does not lock these tables, and that
all selects from this database will use an isolation level of dirty read. This means that the data can
change dynamically as you are retrieving it. This also means that selecting data from the sysmaster
tables does not lock any of your users from processing their data. As mentioned above, the SMI tables
are described as pseudo-tables which point directly to the shared memory structures in OnLine where
the data is stored. That means they are not actually on disk. However, because many of the SMI
tables are really views, selecting from them does create temporary tables and generate disk activity.
A script named sysmaster.sql, located in the directory $INFORMIXDIR/etc, contains the SQL
statements that create the sysmaster database. The process of creating it is interesting and is outlined
as follows:
First, the script creates real tables with the structures of the pseudo-tables.
Then, the table structures of the real tables are copied to temp tables.
The real tables are then dropped.
The partnum column in systables is updated to indicate that these tables point to pseudo-
tables in shared memory.
The flags_text table is created; it holds the text interpretations of all the descriptions and
flags used in the SMI tables.
The stored procedures used to create the views are created, two of which are particularly
interesting:
- bitval() is a stored procedure for extracting boolean flag values
- l2date() is a stored procedure for converting unix time() long values to dates
Finally, the script creates the SMI views.
After the sysmaster script is run, the system executes another script to create the On-Archive
tables and views in the sysmaster database.
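The two helper procedures mentioned above can be called directly in your own queries. The
following is a minimal sketch; the '0x2' mask is illustrative only, so check sysmaster.sql for the
masks each view actually uses:

```sql
-- helpers.sql - a sketch using the two helper procedures
-- (the '0x2' mask is illustrative; see sysmaster.sql for real masks)
database sysmaster;
select name,
bitval(flags, '0x2') flagbit, -- decode one flag bit to 1 or 0
l2date(created) created_date -- convert unix time() long to a date
from sysdatabases;
```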
Warning: The sysmaster database is created the first time you go into online mode after you first
initialize your system. Do NOT start creating any other database until this process is complete or you
may corrupt your sysmaster database. You will need 2000 KB of logical log space to create the
sysmaster database. If there are problems creating the sysmaster database, shut your OnLine server
down and restart it. This will re-create the sysmaster database. Monitor your online.log file until you
see the messages showing the successful completion of building the sysmaster database in the
online.log (Figure 4).
Figure 4. Online.log messages showing successful creation of sysmaster database
12:10:24 On-Line Mode
12:10:24 Building 'sysmaster' database ...
12:11:02 Logical Log 1 Complete.
12:11:03 Process exited with return code 1: /bin/sh /bin/sh -c
/u3/informix7/log_full.sh 2 23 "Logical Log 1 Complete." "Logical Log 1
Complete."
12:11:22 Logical Log 2 Complete.
12:11:23 Process exited with return code 1: /bin/sh /bin/sh -c
/u3/informix7/log_full.sh 2 23 "Logical Log 2 Complete." "Logical Log 2
Complete."
12:11:26 Checkpoint Completed: duration was 3 seconds.
12:11:40 Logical Log 3 Complete.
12:11:41 Process exited with return code 1: /bin/sh /bin/sh -c
/u3/informix7/log_full.sh 2 23 "Logical Log 3 Complete." "Logical Log 3
Complete."
12:11:59 Logical Log 4 Complete.
12:12:00 Process exited with return code 1: /bin/sh /bin/sh -c
/u3/informix7/log_full.sh 2 23 "Logical Log 4 Complete." "Logical Log 4
Complete."
12:12:25 'sysmaster' database built successfully.
Supported SMI Tables
There are 18 supported SMI tables in release 7.23 of INFORMIX-OnLine DSA. We will discuss the
more important ones and a few unsupported ones in this chapter.
Figure 5. Supported SMI tables
Supported tables and views: (OnLine 7.23)
sysadtinfo Auditing configuration table
sysaudit Auditing event masks table
syschkio Chunk I/O statistics view
syschunks Chunk information view
sysconfig Configuration information view
sysdatabases Database information view
sysdbslocale Locale information view
sysdbspaces Dbspace information view
sysdri Data replication view
sysextents Table extent allocation view
syslocks Current lock information view
syslogs Logical Log status view
sysprofile Current system profile view
sysptprof Current table profile view
syssessions Current user sessions view
sysseswts Session wait times view
systabnames Table information table
sysvpprof Current VP profile view
Differences From Other Databases
There are several key differences between the sysmaster database and other databases you might
create. Remember that this is a database that points to the server's shared memory structures, not to
tables that are stored on disk. Some of the differences are:
You cannot update the sysmaster database. Its purpose is to allow you to read information
about the server. Trying to update its tables should generate an error message but may corrupt
the server.
You cannot run dbschema on these tables to get their structure. This will generate an error
message.
You cannot drop the sysmaster database or any tables within it. Again, this should generate an
error message.
The data is dynamic and may change while you are retrieving it. The sysmaster database has
an effective isolation level of dirty read even though it looks like a database with unbuffered
logging. This prevents your queries from locking users and slowing down their processing.
However, because the sysmaster database uses unbuffered logging, its temp tables are logged.
You can create triggers and stored procedures on the sysmaster database, but the triggers will
never be executed. Again, this is because this is not a real database but pointers to shared
memory.
The sysmaster database reads the same shared memory structures read by the command line utility
"onstat". The statistical data is reset to zero when OnLine is shut down and restarted.
It is also reset to zero when the "onstat -z" command to reset statistics is used. Individual user
statistical data is lost when a user disconnects from the server.
Now, let's examine some of the more interesting tables in the sysmaster database and what else can
be done with them.
3. Server Information
This first section will look at how you determine the state and configuration of your INFORMIX-
OnLine server from the sysmaster database. We will look at four tables and how to use them.
Server configuration and statistics tables:
sysconfig - ONCONFIG File
sysprofile - Server Statistics
syslogs - Logical Logs
sysvpprof - Virtual Processors
Server Configuration Parameters: sysconfig
The view sysconfig contains configuration information from the OnLine server. This information was
read from the ONCONFIG file when the server was started. Have you ever needed to know from
within a program how your server was set up? Or what TAPEDEV is set to?
View sysconfig
Column Data Type Description
cf_id integer unique numeric identifier
cf_name char(18) config parameter name
cf_flags integer flags, 0 = in view sysconfig
cf_original char(256) value in ONCONFIG at boottime
cf_effective char(256) value effectively in use
cf_default char(256) value by default
Example queries:
To find out what the current tape device is:
select cf_effective from sysconfig where cf_name = "TAPEDEV";
To find the server name:
select cf_effective from sysconfig where cf_name =
"DBSERVERNAME";
To find out if data replication is turned on:
select cf_effective from sysconfig where cf_name = "DRAUTO";
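Because the view carries both the boot-time value and the value actually in effect, you can also list
any parameters whose effective value differs from what was in the ONCONFIG file at startup. A
sketch:

```sql
-- params.sql - parameters changed since the server was booted
database sysmaster;
select cf_name, cf_original, cf_effective
from sysconfig
where cf_original != cf_effective;
```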
Server Profile Information: sysprofile
The sysprofile table is a view based on values in a table called syshmhdr. Syshmhdr points to the
same shared memory area as the onstat utility with the -p option. When you zero out the statistics
with "onstat -z", all values in the syshmhdr table are reset to zero.
View sysprofile
Column Data Type Description
name char(16) profile element name
value integer current value
One of the best uses of this data is for developing alarms when certain values fall below acceptable
levels. The Informix documentation says that tables in the sysmaster database do not run triggers.
This is because the updates to these tables take place within OnLine shared memory and not through
SQL which activates triggers. However, you can create a program to poll this table at specified
intervals to select data and see if it falls below your expectations.
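As a sketch of such a poll, the following query flags overflow counters that have gone non-zero.
The element names shown (ovlock, ovbuff) are assumptions borrowed from onstat -p terminology;
verify the exact names on your own server with a plain select on sysprofile:

```sql
-- profalarm.sql - polling sketch; element names are assumptions,
-- confirm them first with: select name from sysprofile
database sysmaster;
select name, value
from sysprofile
where name in ("ovlock", "ovbuff")
and value > 0;
```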
Logical Logs Information: syslogs
Syslogs is a view based on the table syslogfil. This is an example of where the SMI views are a great
tool for presenting data in a more understandable format. Syslogfil has a field called flags which
contains status information encoded as boolean bits in a smallint. The view syslogs decodes that data
into six fields: is_used, is_current, is_backed_up, is_new, is_archived, and is_temp, with 1 if true and
0 if false.
View syslogs
Column Data Type Description
number smallint logfile number
uniqid integer logfile uniqid
size integer pages in logfile
used integer pages used in logfile
is_used integer 1 for used, 0 for free
is_current integer 1 for current
is_backed_up integer 1 for backed up
is_new integer 1 for new
is_archived integer 1 for archived
is_temp integer 1 for temp
flags smallint logfile flags
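For example, the decoded flag columns make it easy to spot logical logs that have been used but not
yet backed up, a common alarm condition. A sketch:

```sql
-- logcheck.sql - logical logs used but not yet backed up
database sysmaster;
select number, uniqid, size, used
from syslogs
where is_used = 1
and is_backed_up = 0;
```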
Virtual Processor Information and Statistics: sysvpprof
Sysvpprof is another view that is more readable than the underlying table sysvplst. As with the view
syslogs in the above paragraph, this view has data that is converted to make it more understandable.
This time the flags are converted to text descriptions from the flags_text table.
View sysvpprof
Column Data Type Description
vpid integer VP id
txt char(50) VP class name
usecs_user float number of unix secs of user time
usecs_sys float number of unix secs of system time
The following query on the base table sysvplst achieves the same results as the view.
Figure 6. SQL script to display VP status
-- vpstat.sql
select vpid,
txt[1,5] class,
pid,
usecs_user,
usecs_sys,
num_ready
from sysvplst a, flags_text b
where a.flags != 6
and a.class = b.flags
and b.tabname = 'sysvplst';
SQL Output
vpid class pid usecs_user usecs_sys num_ready
1 cpu 335 793.61 30.46 0
2 adm 336 0.02 0.11 0
3 lio 337 1.15 5.98 0
4 pio 338 0.19 1.13 0
5 aio 339 0.94 4.27 0
6 msc 340 0.15 0.14 0
7 aio 341 0.81 5.72 0
8 tli 342 1.79 3.02 0
9 aio 343 0.52 2.50 0
10 aio 344 0.28 1.16 0
11 aio 345 0.09 0.86 0
12 aio 346 0.16 0.48 0
4. Dbspace and Chunk Information
Now let's look at the SMI tables that contain information about your disk space, chunks, and dbspace.
There are four tables that contain this data.
sysdbspaces - DB Spaces
syschunks - Chunks
syschkio - I/O by Chunk
syschfree* - Free Space by Chunk
* Note: Syschfree is not a supported table.
Dbspace Configuration: sysdbspaces
The sysmaster database has three key tables containing dbspace and chunk information. The first one
is sysdbspaces. This is a view that interprets the underlying table sysdbstab. Sysdbspaces serves two
purposes: it translates a bit field containing flags into separate columns where 1 equals yes and 0
equals no, and, it allows the underlying table to change between releases without having to change
code. The view is defined as follows:
View sysdbspaces
Column Data Type Description
dbsnum smallint dbspace number,
name char(18) dbspace name,
owner char(8) dbspace owner,
fchunk smallint first chunk in dbspace,
nchunks smallint number of chunks in dbspace,
is_mirrored bitval is dbspace mirrored, 1=Yes, 0=No
is_blobspace bitval is dbspace a blob space, 1=Yes, 0=No
is_temp bitval is dbspace temp, 1=Yes, 0=No
flags smallint dbspace flags
The columns of type bitval are the flags that are extracted from the flags column by a stored
procedure called bitval when the view is generated.
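These decoded flag columns make quick status checks trivial. For instance, a sketch listing each
dbspace's mirroring and temp status:

```sql
-- dbsflags.sql - decoded status flags for each dbspace
database sysmaster;
select name, is_mirrored, is_blobspace, is_temp
from sysdbspaces
order by name;
```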
Chunk Configuration: syschunks
The syschunks table is also a view based on two actual tables, one for primary chunk information,
syschktab, and one for mirror chunk information, sysmchktab. The following is the layout of
syschunks:
View syschunks
Column Data Type Description
chknum smallint chunk number
dbsnum smallint dbspace number
nxchknum smallint number of next chunk in dbspace
chksize integer pages in chunk
offset integer pages offset into device
nfree integer free pages in chunk
is_offline bitval is chunk offline, 1=Yes, 0=No
is_recovering bitval is chunk recovering, 1=Yes, 0=No
is_blobchunk bitval is chunk blobchunk, 1=Yes, 0=No
is_inconsistent bitval is chunk inconsistent, 1=Yes, 0=No
flags smallint chunk flags converted by bitval
fname char(128) device pathname
mfname char(128) mirror device pathname
moffset integer pages offset into mirror device
mis_offline bitval is mirror offline, 1=Yes, 0=No
mis_recovering bitval is mirror recovering, 1=Yes, 0=No
mflags smallint mirror chunk flags
Displaying Free Dbspace
Now, we will take a look at several ways to use this dbspace and chunk information. One capability I
have always wanted is a way to show the amount of dbspace used and free in the same format as the
Unix "df -k" command. The sysmaster database contains information about the dbspaces and chunks,
so this can be generated with an SQL script. The following is an SQL script to generate the amount of
free space in a dbspace. It uses the sysdbspaces and syschunks tables to collect its information.
Figure 7. SQL script to display free dbspace
-- dbsfree.sql - display free dbspace like Unix "df -k " command
database sysmaster;
select name[1,8] dbspace, -- name truncated to fit on one line
sum(chksize) Pages_size, -- sum of all chunks size pages
sum(chksize) - sum(nfree) Pages_used,
sum(nfree) Pages_free, -- sum of all chunks free pages
round ((sum(nfree)) / (sum(chksize)) * 100, 2) percent_free
from sysdbspaces d, syschunks c
where d.dbsnum = c.dbsnum
group by 1
order by 1;
Sample output
dbspace pages_size pages_used pages_free percent_free
rootdbs 50000 13521 36479 72.96
dbspace1 100000 87532 12468 12.47
dbspace2 100000 62876 37124 37.12
dbspace3 100000 201 99799 99.80
Displaying Chunk Status
The next script lists the status and characteristics of each chunk device.
Figure 8. SQL script showing chunk status
-- chkstatus.sql - display information about a chunk
database sysmaster;
select
name dbspace, -- dbspace name
is_mirrored, -- dbspace is mirrored 1=Yes 0=No
is_blobspace, -- dbspace is blobspace 1=Yes 0=No
is_temp, -- dbspace is temp 1=Yes 0=No
chknum chunknum, -- chunk number
fname device, -- dev path
offset dev_offset, -- dev offset
is_offline, -- Offline 1=Yes 0=No
is_recovering, -- Recovering 1=Yes 0=No
is_blobchunk, -- Blobspace 1=Yes 0=No
is_inconsistent, -- Inconsistent 1=Yes 0=No
chksize Pages_size, -- chunk size in pages
(chksize - nfree) Pages_used, -- chunk pages used
nfree Pages_free, -- chunk free pages
round ((nfree / chksize) * 100, 2) percent_free, -- free
mfname mirror_device, -- mirror dev path
moffset mirror_offset, -- mirror dev offset
mis_offline , -- mirror offline 1=Yes 0=No
mis_recovering -- mirror recovering 1=Yes 0=No
from sysdbspaces d, syschunks c
where d.dbsnum = c.dbsnum
order by dbspace, chunknum;
Displaying Blocks of Free Space in a Chunk: syschfree
In planning expansions, new databases, or when adding new tables to an existing server, I like to
know what blocks of contiguous free space are available. This allows placing new tables in dbspaces
where they will not be broken up by extents. One of the sysmaster tables tracks the chunk free list,
which is the available space in a chunk.
Table syschfree
Column Data Type Description
chknum integer chunk number
extnum integer extent number in chunk
start integer physical addr of start
leng integer length of extent
The next script uses this table to create a list of free space and the size of each space that is available.
Figure 9. SQL script showing free space on chunks
-- chkflist.sql - display list of free space within a chunk
database sysmaster;
select
name dbspace, -- dbspace name truncated to fit
f.chknum, -- chunk number
f.extnum, -- extent number of free space
f.start, -- starting address of free space
f.leng free_pages -- length of free space
from sysdbspaces d, syschunks c, syschfree f
where d.dbsnum = c.dbsnum
and c.chknum = f.chknum
order by dbspace, chknum;
Sample Output
dbspace chknum extnum start free_pages
rootdbs 1 0 11905 1608
rootdbs 1 1 15129 34871
IO Statistics by Chunk Devices: syschkio
Informix uses a view, syschkio, to collect information about the number of disk reads and writes per
chunk. This view is based on the tables syschktab and sysmchktab.
View syschkio
Column Data Type Description
chunknum smallint chunk number
reads integer number of read ops
pagesread integer number of pages read
writes integer number of write ops
pageswritten integer number of pages written
mreads integer number of mirror read ops
mpagesread integer number of mirror pages read
mwrites integer number of mirror write ops
mpageswritten integer number of mirror pages written
The following script displays IO usage of chunk devices. It uses the base tables so the mirror chunks
can be displayed on separate rows. It also joins with the base table that contains the dbspace name.
Figure 10. SQL script displaying chunk I/O
-- chkio.sql - displays chunk IO status
database sysmaster;
select
name[1,10] dbspace, -- truncated to fit 80 char screen line
chknum,
"Primary" chktype,
reads,
writes,
pagesread,
pageswritten
from syschktab c, sysdbstab d
where c.dbsnum = d.dbsnum
union all
select
name[1,10] dbspace,
chknum,
"Mirror" chktype,
reads,
writes,
pagesread,
pageswritten
from sysmchktab c, sysdbstab d
where c.dbsnum = d.dbsnum
order by 1,2,3;
Sample Output
dbspace chknum chktype reads writes pagesread pageswritten
rootdbs 1 Primary 74209 165064 209177 308004
rootdbs 1 Mirror 69401 159832 209018 307985
A better view of your IO is to see the percent of the total IO that takes place per chunk. This next
query collects IO stats into a temp table, and then uses that to calculate total IO stats for all chunks.
Then each chunk's IO is compared with the total to determine the percent of IO by chunk. The
following script uses the one above as a basis to show IO by chunk as a percent of the total IO.
Figure 11. SQL script chunk I/O summary
-- chkiosum.sql - calculates percent of IO by chunk
database sysmaster;
-- Collect chunk IO stats into temp table A
select
name dbspace,
chknum,
"Primary" chktype,
reads,
writes,
pagesread,
pageswritten
from syschktab c, sysdbstab d
where c.dbsnum = d.dbsnum
union all
select
name[1,10] dbspace,
chknum,
"Mirror" chktype,
reads,
writes,
pagesread,
pageswritten
from sysmchktab c, sysdbstab d
where c.dbsnum = d.dbsnum
into temp A;
-- Collect total IO stats into temp table B
select
sum(reads) total_reads,
sum(writes) total_writes,
sum(pagesread) total_pgreads,
sum(pageswritten) total_pgwrites
from A
into temp B;
-- Report showing each chunks percent of total IO
select
dbspace,
chknum,
chktype,
reads,
writes,
pagesread,
pageswritten,
round((reads/total_reads) *100, 2) percent_reads,
round((writes/total_writes) *100, 2) percent_writes,
round((pagesread/total_pgreads) *100, 2) percent_pg_reads,
round((pageswritten/total_pgwrites) *100, 2) percent_pg_writes
from A, B
order by 11; -- order by percent page writes
Sample output for 1 chunk
dbspace datadbs
chknum 9
chktype Primary
reads 12001
writes 9804
pagesread 23894
pageswritten 14584
percent_reads 0.33
percent_writes 0.75
percent_pg_reads 37.59
percent_pg_writes 1.86
5. Database and Table Information
The next four tables we will look at store information on your databases, tables, and extents. They are:
sysdatabases - Databases
systabnames - Tables
sysextents - Table extents
sysptprof - Table I/O
Information on All Databases on a Server: sysdatabases
This view has data on all databases on a server. Have you ever needed to create a pop-up list of
databases within a program? This table now allows programs to give users a list of databases to select
from without resorting to ESQL/C. The following is the definition of this view:
View sysdatabases
Column Data Type Description
name char(18) database name
partnum integer table id for systables
owner char(8) user name of creator
created integer date created
is_logging bitval unbuffered logging, 1=Yes, 0= No
is_buff_log bitval buffered logging, 1=Yes, 0= No
is_ansi bitval ANSI mode database, 1=Yes, 0= No
is_nls bitval NLS support, 1=Yes, 0= No
flags smallint flags indicating logging
The following is a script to list all databases, owners, dbspaces, and logging status. Notice the
function dbinfo is used. This is a new function in 7.X with several uses, one of which is to convert
the partnum of a database into its corresponding dbspace. This function will be used in several
examples that follow.
Figure 12. SQL script listing all databases on the server
-- dblist.sql - List all databases, owner and logging status
database sysmaster;
select
dbinfo("DBSPACE",partnum) dbspace,
name database,
owner,
is_logging,
is_buff_log
from sysdatabases
order by dbspace, name;
Sample Output
dbspace database owner is_logging is_buff_log
rootdbs central lester 0 0
rootdbs datatools lester 0 0
rootdbs dba lester 0 0
rootdbs roster lester 0 0
rootdbs stores7 lester 0 0
rootdbs sunset linda 0 0
rootdbs sysmaster informix 1 0
rootdbs zip lester 1 1
Information About Database Tables: systabnames, sysextents, and sysptprof
Three tables contain all the data you need from the sysmaster database about tables in your database.
The first of these is a real table defined as follows:
Table systabnames - All tables on the server
Column Data Type Description
partnum integer table id for table
dbsname char(18) database name
owner char(8) table owner
tabname char(18) table name
collate char(32) collation assoc with NLS DB
View sysextents - Tables and each extent on the server
Column Data Type Description
dbsname char(18) database name
tabname char(18) table name
start integer physical addr for this extent
size integer size of this extent
The view sysextents is based on a table, sysptnext, defined as follows:
Table sysptnext
Column Data Type Description
pe_partnum integer partnum for this partition
pe_extnum smallint extent number
pe_phys integer physical addr for this extent
pe_size integer size of this extent
pe_log integer logical page for start
View sysptprof - Tables IO profile
Column Data Type Description
dbsname char(18) database name
tabname char(18) table name
partnum integer partnum for this table
lockreqs integer lock requests
lockwts integer lock waits
deadlks integer deadlocks
lktouts integer lock timeouts
isreads integer reads
iswrites integer writes
isrewrites integer rewrites
isdeletes integer deletes
bufreads integer buffer reads
bufwrites integer buffer writes
seqscans integer sequential scans
pagreads integer disk reads
pagwrites integer disk writes
These tables allow us to develop scripts to display tables, the number of extents, and pages used. We
can also present a layout of dbspace, databases, tables, and extents similar to the command "tbcheck -
pe". And finally, we can show table usage statistics sorted by which tables have the most hits based
on reads, writes, or locks. These scripts will enable a DBA to monitor and tune the database server.
Extents are created when a table's initial space has been filled up and it needs more space. OnLine
will allocate additional space for a table. However, the table will no longer be contiguous, and
performance will start to degrade. Informix will display warning messages when a table reaches more
than 8 extents. Depending on a number of factors, at approximately 180-230 extents a table will not
be able to expand and no additional rows can be inserted. The following script lists all tables sorted
by the number of extents. The tables that show up with many extents may need to be unloaded and
rebuilt.
Figure 13. SQL script showing tables and extents
-- tabextent.sql - List tables, number of extents and size of table.
database sysmaster;
select dbsname,
tabname,
count(*) num_of_extents,
sum( pe_size ) total_size
from systabnames, sysptnext
where partnum = pe_partnum
group by 1, 2
order by 3 desc, 4 desc;
Sample Output
dbsname tabname num_of_extents total_size
rootdbs TBLSpace 8 400
sysmaster syscolumns 6 56
sunset inventory 3 376
sunset sales_items 3 96
sunset sales_header 3 48
sunset parts 3 48
sunset customer 3 40
sunset syscolumnext 3 32
sunset employee 3 32
Sometimes it is helpful to see how the tables are interspersed on disk. The following script lists by
dbspace each table and the location of each extent. This is similar to the output from "oncheck -pe".
Figure 14. SQL script showing table layout on chunks
-- tablayout.sql - Show layout of tables and extents
database sysmaster;
select dbinfo( "DBSPACE" , pe_partnum ) dbspace,
dbsname[1,10],
tabname,
pe_phys start,
pe_size size
from sysptnext, outer systabnames
where pe_partnum = partnum
order by dbspace, start;
Sample output
dbspace dbsname tabname start size
rootdbs rootdbs TBLSpace 1048589 50
rootdbs sysmaster sysdatabases 1050639 4
rootdbs sysmaster systables 1050643 8
rootdbs sysmaster syscolumns 1050651 16
rootdbs sysmaster sysindexes 1050667 8
rootdbs sysmaster systabauth 1050675 8
rootdbs sysmaster syscolauth 1050683 8
rootdbs sysmaster sysviews 1050691 8
rootdbs sysmaster sysusers 1050699 8
rootdbs sysmaster sysdepend 1050707 8
rootdbs sysmaster syssynonyms 1050715 8
IO Performance of Tables
Have you ever wanted to know which tables have the most reads, writes, or locks? The last script in
this article shows the performance profile of tables. By changing the columns displayed and the sort
order of the script, you can display the tables with the most reads, writes, or locks first.
Figure 15. SQL script show table I/O activity
-- tabprof.sql
database sysmaster;
select
dbsname,
tabname,
isreads,
bufreads,
pagreads
-- uncomment the following to show writes
-- iswrites,
-- bufwrites,
-- pagwrites
-- uncomment the following to show locks
-- lockreqs,
-- lockwts,
-- deadlks
from sysptprof
order by isreads desc; -- change this sort to whatever you need to monitor.
Sample Output
dbsname tabname isreads bufreads pagreads
zip zip 334175 35876509 1111
sysmaster sysviews 259712 634102 1119
sysmaster systables 60999 240018 1878
zip systables 3491 8228 543
sysmaster sysusers 2406 8936 87
sysmaster sysprocauth 1276 5104 12
sunset systables 705 2251 26
sysmaster sysprocedures 640 2562 21
sysmaster syscolumns 637 1512 49
stores7 systables 565 1361 16
sysmaster sysdatabases 534 2073 902
6. User Session Information
This last set of SMI tables deals with users and information about their sessions. These tables were
used in our example script "dbwho" at the beginning of this chapter.
syssessions - Session data
syssesprof - User statistics
syslocks - User Locks
sysseswts - Wait times
User Session and Connection Information: syssessions
This view contains information from two shared memory structures, the user control and thread
control table. This tells you who is logged in to your server and some basic data about their session.
View syssessions
Column Data Type Description
sid integer Session id number
username char(8) User name
uid smallint User unix id
pid integer User process id
hostname char(16) Hostname
tty char(16) TTY port
connected integer Time user connected
feprogram char(16) Program name
pooladdr integer Pointer to private session pool
is_wlatch integer Flag 1=Yes, 0=No, wait on latch
is_wlock integer Flag 1=Yes, 0=No, wait on lock
is_wbuff integer Flag 1=Yes, 0=No, wait on buffer
is_wckpt integer Flag 1=Yes, 0=No, wait on checkpoint
is_wlogbuf integer Flag 1=Yes, 0=No, wait on log buffer
is_wtrans integer Flag 1=Yes, 0=No, wait on a transaction
is_monitor integer Flag 1=Yes, 0=No, a monitoring process
is_incrit integer Flag 1=Yes, 0=No, in critical section
state integer Flags
The following is a quick query to tell who is using your server.
Figure 16. SQL script showing user sessions
-- sessions.sql
select sid,
username,
pid,
hostname,
l2date(connected) startdate -- convert unix time to date
from syssessions
Sample Output
sid username pid hostname startdate
47 lester 11564 merlin 07/14/1997
This next query lists all users and their session status. The objective is to show who is blocked
waiting on another user, a lock, or some other OnLine process. The five fields are yes/no flags where
1 = yes and 0 = no. If all the fields are 0, then none of the sessions are blocked. In the following
example, one session is blocked waiting on a locked record.
Figure 17. SQL script users waiting on resources
-- seswait.sql
select username,
is_wlatch, -- blocked waiting on a latch
is_wlock, -- blocked waiting on a locked record or table
is_wbuff, -- blocked waiting on a buffer
is_wckpt, -- blocked waiting on a checkpoint
is_incrit -- session is in a critical section of transaction
-- (e.g. writing to disk)
from syssessions
order by username;
Sample Output
username is_wlatch is_wlock is_wbuff is_wckpt is_incrit
lester 0 1 0 0 0
lester 0 0 0 0 0
lester 0 0 0 0 0
User Session Performance Statistics: syssesprof
This view syssesprof provides a way to find out at a given point in time how much of your server
resources each user is using. The view contains the following information.
View syssesprof
Column Data Type Description
sid integer, Session Id
lockreqs decimal(16,0) Locks requested
locksheld decimal(16,0) Locks held
lockwts decimal(16,0) Locks waits
deadlks decimal(16,0) Deadlocks detected
lktouts decimal(16,0) Deadlock timeouts
logrecs decimal(16,0) Logical Log records written
isreads decimal(16,0) Reads
iswrites decimal(16,0) Writes
isrewrites decimal(16,0) Rewrites
isdeletes decimal(16,0) Deletes
iscommits decimal(16,0) Commits
isrollbacks decimal(16,0) Rollbacks
longtxs decimal(16,0) Long transactions
bufreads decimal(16,0) Buffer reads
bufwrites decimal(16,0) Buffer writes
seqscans decimal(16,0) Sequential scans
pagreads decimal(16,0) Page reads
pagwrites decimal(16,0) Page writes
total_sorts decimal(16,0) Total sorts
dsksorts decimal(16,0) Sorts to disk
max_sortdiskspace decimal(16,0) Max space used by a sort
logspused decimal(16,0) Current log bytes used
maxlogsp decimal(16,0) Max bytes of logical logs used
This table contains data gathered since the user logged on. Each time a user disconnects, their data is
lost, so you cannot use this data for charging users for server usage. Also, when a DBA resets the
server statistics with the command "onstat -z", all profile data is reset to zero.
I like to monitor the number of locks used by each user and their buffer usage. The following is an
example query.
Figure 19. SQL script to monitor resource usage by user
-- sesprof.sql
select username,
syssesprof.sid,
lockreqs,
bufreads,
bufwrites
from syssesprof, syssessions
where syssesprof.sid = syssessions.sid
order by bufreads desc
Active Locks on the Server: syslocks
This view contains information about all active locks on your server. It can be very large; if you have
many users and your server is configured to handle a large number of locks, this view can contain
hundreds of thousands of records or more. The view is composed of six tables, and queries on it
create a temp table that is logged to your logical log. Performance may be a bit slow because of the
sheer volume of data the view produces, but the data it contains can be very helpful in understanding
how your system is performing.
View syslocks
Column Data Type Description
dbsname char(18) Database name
tabname char(18) Table name
rowidlk integer Rowid for index key lock
keynum smallint Key number of index key lock
owner integer Session ID of lock owner
waiter integer Session ID of first waiter
type char(4) Type of Lock
Types of Locks
B - byte lock
IS - intent shared lock
S - shared lock
XS - repeatable read shared key
U - update lock
IX - intent exclusive lock
SIX - shared intent exclusive
X - exclusive lock
XR - repeatable read exclusive
Basically there are three main types of locks: a shared lock (S), an exclusive lock (X), and an update
lock (U). A shared lock allows other users to also read the data, but none may change it. An exclusive
lock does not allow anyone else to lock that data, even in shared mode. An update lock prevents other
users from changing data while you are changing it.
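As a minimal sketch of these rules (an illustration of the general S/U/X semantics described above, not of Informix internals), a lock manager's compatibility check could look like this:

```python
# Sketch of basic lock-mode compatibility for shared (S), update (U), and
# exclusive (X) locks, following the rules described in the text.

COMPATIBLE = {
    ("S", "S"): True,   # many readers may share the data
    ("S", "U"): True,   # an update lock still allows readers
    ("U", "S"): True,
    ("S", "X"): False,  # exclusive blocks everyone else
    ("X", "S"): False,
    ("U", "U"): False,  # only one pending updater at a time
    ("U", "X"): False,
    ("X", "U"): False,
    ("X", "X"): False,
}

def can_grant(held: str, requested: str) -> bool:
    """Return True if `requested` can be granted while `held` is in place."""
    return COMPATIBLE[(held, requested)]

print(can_grant("S", "S"))  # True: shared locks coexist
print(can_grant("S", "X"))  # False: exclusive must wait for readers
```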
There are six objects that can be locked in OnLine.
Database - Every user that opens a database places a shared lock on the database to prevent
someone else from dropping the database while it is in use. This shows up as a lock on the
sysmaster database and the sysdatabase tables, and the rowid will point to the record
containing database name.
Table - A table lock shows up as a lock on a table with a rowid of 0 and a keynum of 0.
Page - A page level lock shows as a rowid ending in 00. This means all the rows on that page
are locked.
Row - A row level lock will show with an actual rowid (not ending in 00).
Key - A key lock will show with a keynum. If a row has indexes that need to be updated this
will place locks on the indexes for that row.
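A small sketch may make the page-lock convention concrete. Assuming a rowid packs the logical page number in its upper bytes and the slot number in its low byte (which is why a page lock shows a rowid ending in 00 in hex), the two parts can be separated like this:

```python
# Hedged sketch: decompose a rowid into page number and slot, assuming the
# low byte is the slot and the remaining upper bytes are the logical page.

def decode_rowid(rowid: int):
    page = rowid >> 8      # logical page number (upper bytes)
    slot = rowid & 0xFF    # slot on the page; 0 means the whole page
    return page, slot

def is_page_lock(rowid: int) -> bool:
    # A page-level lock shows a nonzero rowid ending in 00 (hex)
    return rowid != 0 and (rowid & 0xFF) == 0

print(decode_rowid(0x101))   # (1, 1): page 1, slot 1
print(is_page_lock(0x100))   # True: rowid ends in 00
```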
One of the key data elements missing from this view is the username and session id (sid) of the user
who has a lock. The following query adds the user's name and session id and uses the underlying
tables to improve performance. It also puts the data into a temp table from which you can select
subsets of data much more quickly than if you were to repeat the query.
Figure 20. SQL script to show all locks
-- locks.sql
select dbsname,
b.tabname,
rowidr,
keynum,
e.txt type,
d.sid owner,
g.username ownername,
f.sid waiter,
h.username waitname
from syslcktab a,
systabnames b,
systxptab c,
sysrstcb d,
sysscblst g,
flags_text e,
outer ( sysrstcb f , sysscblst h )
where a.partnum = b.partnum
and a.owner = c.address
and c.owner = d.address
and a.wtlist = f.address
and d.sid = g.sid
and e.tabname = 'syslcktab'
and e.flags = a.type
and f.sid = h.sid
into temp A;
select dbsname,
tabname,
rowidr,
keynum,
type[1,4],
owner,
ownername ,
waiter,
waitname
from A;
Example SQL Output
dbsname sysmaster
tabname a
rowidr 0
keynum 0
type X
owner 47
ownername lester
waiter
waitname
The above example SQL output shows the row from syslocks that displays the exclusive lock I
created on the temp table "A" while running the query.
A more important use of this query is to find out when one user is waiting on the lock owned by
another user. When a user has a database object locked, the first user waiting on the object can be
displayed. (This will only occur when a user has set lock mode to WAIT). The following script
displays only the users that have locks where someone else is waiting on their process. There is one
key difference between this script and the one above. The tables sysrstcb and sysscblst in this script
do not use an outer join, so only rows that have waiters will be returned. In this example "linda" has
an update lock on a row and "lester" is waiting for that update to complete.
Figure 21. SQL script to show users waiting on locks
-- lockwaits.sql
database sysmaster;
select dbsname,
b.tabname,
rowidr,
keynum,
e.txt type,
d.sid owner,
g.username ownername,
f.sid waiter,
h.username waitname
from syslcktab a,
systabnames b,
systxptab c,
sysrstcb d,
sysscblst g,
flags_text e,
sysrstcb f , sysscblst h
where a.partnum = b.partnum
and a.owner = c.address
and c.owner = d.address
and a.wtlist = f.address
and d.sid = g.sid
and e.tabname = 'syslcktab'
and e.flags = a.type
and f.sid = h.sid
into temp A;
select dbsname,
tabname,
type[1,4],
owner,
ownername ,
waitname
from A;
SQL Output
dbsname tabname type owner ownername waitname
stores7 items U 29 linda lester
Wait Status and Times on Objects: sysseswts
This is a supported view that shows all sessions that are blocked and waiting on a database object,
along with the amount of time each has been waiting. On a well-tuned system this view should be
empty. When it is not, it provides useful information about what is causing your performance to
slow down.
View sysseswts
Column Data Type Description
sid integer Session ID
reason char(50) Description of reason for wait
numwaits integer Number of waits for this reason
cumtime float Cumulative wait time for this reason
maxtime integer Max wait time for this reason
7. Some Unsupported Extras
Several of the SMI tables are not documented and not officially supported. These could change in
future releases. Two additional unsupported tables I have found helpful are systrans and
syssqexplain.
User Transactions: systrans
Three of the fields in systrans are very helpful to determine what logical log number a transaction
began in, and the current logical log number in use by a transaction.
Key systrans fields
Column Data Type Description
tx_id integer pointer to transaction table
tx_logbeg integer transaction starting logical log
tx_loguniq integer transaction current logical log number
This can be used to create a script to determine what logical log files have active transactions. The
output of this will tell you what logical logs are free and available for reuse. This first script lists all
user transactions and what logs they are using.
Figure 22. SQL script to display transactions and logs used
-- txlogpos.sql
select
t.username,
t.sid,
tx_logbeg,
tx_loguniq,
tx_logpos
from systrans x, sysrstcb t
where tx_owner = t.address
SQL Output
username sid tx_logbeg tx_loguniq tx_logpos
informix 1 0 16 892952
informix 0 0 0 0
informix 8 0 0 0
lester 53 0 0 0
informix 12 0 0 0
lester 51 14 16 0
This shows that my logical logs numbered 14 to 16 are in use by transactions.
Another helpful use of this view is to summarize the transactions by logical log. This next script
shows transaction status by logical log.
Figure 23. SQL script to view logical logs status
-- logstat.sql
database sysmaster;
-- select transaction data into a temp table
select tx_logbeg, tx_loguniq
from systrans
into temp b;
-- count how many transactions begin in each log
select tx_logbeg, count(*) cnt
from B
where tx_logbeg > 0
group by tx_logbeg
into temp C;
-- count how many transactions currently are in each log
select tx_loguniq, count(*) cnt
from B
where tx_loguniq > 0
group by tx_loguniq
into temp D;
-- join data from counts with syslogs
select
uniqid,
size,
is_backed_up, -- 0 = no, 1 = yes log is backed up
is_archived, -- 0 = no, 1 = yes log is on last archive
c.cnt tx_beg_cnt,
d.cnt tx_curr_cnt
from syslogs, outer c, outer D
where uniqid = c.tx_logbeg
and uniqid = d.tx_loguniq
order by uniqid
SQL Output
uniqid size is_backed_up is_archived tx_beg_cnt tx_curr_cnt
10 500 1 1
11 500 1 1
12 500 1 1
13 500 1 1
14 500 1 1
15 500 1 1
16 500 0 1 1 2
This shows that all logs are backed up except the current one, and it has two active transactions.
User Queries: syssqexplain
Have you ever wanted to run a query to see what your users were doing? The view syssqexplain
contains some of the data from a user's session, including the sql that they are currently executing.
Try this query on your system sometime to see your users' SQL.
Figure 24. SQL to view current executing SQL
-- syssql.sql
select username,
sqx_sessionid,
sqx_conbno,
sqx_sqlstatement
from syssqexplain, sysscblst
where sqx_sessionid = sid
SQL Output
username lester
sqx_sessionid 55
sqx_conbno 2
sqx_sqlstatement select username,sqx_sessionid, sqx_conbno, sqx_sqlstatement
from syssqexplain, sysscblst
where sqx_sessionid = sid
username lester
sqx_sessionid 51
sqx_conbno 0
sqx_sqlstatement update items set total_price = 300 where item_num = 1
Conclusion
The sysmaster database is a great tool for a DBA to monitor the Informix server. If you have any
questions or suggestions, please send me e-mail at lester@advancedatatools.com. Also, if you have
any creative scripts for monitoring your server with the sysmaster database, please send them in and I
may include them in future publications.
VAX File Parser
Detailed Design/System Specifications
Version 0.2
Prepared By: Victor Mandujano
Copyright 2014 Livingston International unpublished. All rights reserved. This document contains proprietary and confidential
information of Livingston International. Reproduction, disclosure, or use of any portion of this document without specific written
authorization from Livingston is strictly prohibited. This restriction applies to the information on every page of the document.
Contents may be disclosed only to authorized Livingston International employees and consultants for the purpose of performing
their job responsibilities
<LOB> - <Project Name>
db9d4f33214f80b60cec0ca3e590df64
Printed 3/17/2021
db9d4f33214f80b60cec0ca3e590df64
Page 320 of 362
© Livingston International 2014 - All Rights Reserved CONFIDENTIAL AND PROPRIETARY TO Livingston International
For Internal Use Only - Do Not Duplicate
Table of Contents
1 DOCUMENT ACCEPTANCE AND SIGN-OFF
2 REVISION HISTORY
3 DOCUMENT PURPOSE
4 DESIGN ASSUMPTIONS AND DEPENDENCIES
5 DESIGN CONSIDERATIONS AND CONSTRAINTS
6 COMPONENT/APPLICATION (PROCESS) DESIGN
7 ISSUES & ACTION ITEMS
8 GLOSSARY
9 APPENDIX
Document Acceptance and Sign-off
By signing below I acknowledge that I have read the entire contents of this document and accept the
document in this form as reasonably fulfilling the goals described in the section titled Document
Purpose. I further agree that this will constitute the document of record and cannot be changed
without review and acknowledgement of the groups shown below:
Group / Role    Approver Name    Approver Signature    Date Approved
Revision History
Document/Department Editor:
Date         Revision #   Editor             Description of Change
09/01/2015   0.1          Victor Mandujano   Initial draft
09/10/2015   0.2          Victor Mandujano   Minor changes to the error log.
Document Purpose
This document describes the process to extract information from LOCUS’ funnel (*.VAX) files: parsing the
source files and extracting the relevant keys in order to enable data extraction from the ICC CA Informix database.
Design Assumptions and Dependencies
VAX files are generated from regular operations in LOCUS;
o Files are committed and made available every 15 minutes.
o A job pulls the files into a staging area where Tuxedo is constantly looking for files.
o Tuxedo parses each file and treats each transaction independently.
o Tuxedo commits the transactions to the database.
o It is unknown at this stage if further processing/transformation is done by Tuxedo.
The process described in this document will be executed after the ICC CA database is updated.
Each line on the .VAX file represents multiple transactions.
o Each transaction may be linked to others through the combination of multiple primary
keys.
The database tables and data to be extracted from the funnel files will be configurable and the
support/operations team will provide guidance to configure which tables and keys (transactions)
are required.
Design Considerations and Constraints
This process is limited to extracting the keys that enable a downstream process to perform the
extraction; it is not meant to replace the existing Tuxedo job that loads the data into ICC CA.
Component/Application (Process) Design
Process Model Diagram
[Diagram: "LOCUS ICC Canada Data Solution for MLP Reporting - Interim State". The LOCUS systems
(BELLAT, CASTOR, ALTAIR) generate funnel (.vax) files on the Livingston premises. The files are
transferred by FTP to TUXEDO, which commits transactions (SQL) to the Informix database (Insight
Canada); clients connect via ADO over HTTPS. A copy of the funnel files feeds the new funnel file
parsing process, which produces flat files with key values per table (CSV). These feed a 4GL data
extraction job that reads the database via SQL/ODBC and produces transaction data files (CSV), moved
by secure copy (SCP) to Amazon S3 in an AWS VPC. An Amazon RDS data load feeds the MLP database
via SQL, which QlikView accesses over HTTPS.]
Process Descriptions
6.2.1 The funnel files
A funnel file consists of many transactions grouped into several lines. Every line is composed of the
following position-delimited structure:
Each line is laid out as a sequence of headers and records:

Message Header | Message Type Header | Record Header (1) | Record data (1) | ... | Record Header (n) | Record data (n)

Position    Size   Field
[1:12]             Message Header
[1:6]       6      Date (YYMMDD)
[7:12]      6      Message length (Msg overheads + Record overheads + Record Data)
[13:104]           Message Type Header
[13:26]     14     Timestamp
[27:38]     12     OS Username
[39:44]     6      Source machine node name
[45:52]     8      File identification
[53:56]     4      Source Machine Type
[57:60]     4      Business Txn Type
[61:64]     4      Msg Number
[65:68]     4      Continue / End flag
[69:72]     4      Txn Action Code
[73:76]     4      Mass Update Flag
[77:80]     4      Msg Version / Release #
[81:88]     8      Msg Thread ID
[89:96]     8      Msg Context
[97:104]    8      Msg Record Data Length (Msg data length + DMQ Record overheads)
[105:124]          Record Header
[105:112]   8      Record Number
[113:116]   4      Record Type
[117:120]   4      Record Length
[121:124]   4      Record Action Code
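The positional layout above can be sketched as a slicing table. This is a minimal illustration, assuming the [start:end] positions are 1-based and inclusive as listed; the sample line here is fabricated, not real funnel data:

```python
# Sketch: slice one funnel-file line into header fields using the 1-based
# inclusive positions from the table above (a subset of fields is shown).

MSG_HEADER = {               # (start, end), 1-based inclusive
    "date":        (1, 6),
    "msg_length":  (7, 12),
}
RECORD_HEADER = {
    "record_number": (105, 112),
    "record_type":   (113, 116),
    "record_length": (117, 120),
    "action_code":   (121, 124),
}

def field(line: str, pos: tuple) -> str:
    """Extract a field, converting 1-based inclusive positions to a slice."""
    start, end = pos
    return line[start - 1:end]

line = "150901000124" + "x" * 112          # fabricated 124-character line
print(field(line, MSG_HEADER["date"]))       # '150901'
print(field(line, MSG_HEADER["msg_length"])) # '000124'
```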
Within the Record Header, each action code corresponds to a different action as described in the
following table:
Action Code   Action
51            Insert
53            Update
54            Delete
For further reference on the VAX files, see the document LOCUS_ICC_FORMAT.xlsx, which contains the
following tabs:
LOCUS_IFX_Map contains the mappings from the LOCUS table and positions within the file to
the Informix table/fields.
TX Type Map contains the mappings from the source LOCUS record type to the type code in the
file headers.
HDR Mappings contains the file positional header mappings (Message, Header Type, Record).
6.3 The VAX file parser
The VAX file parser:
Parses the files
Identifies the transactions based on metadata present in the headers.
Extracts reference keys and stores them by transaction type.
Translates the transaction type to the Informix table name.
It performs the following actions for each .vaxtoken file in the input_files folder:
1. Reads the .vaxtoken file. If it does not contain the string ‘transfer completed’, the
associated .vax file is ignored. Otherwise, the parser locates the associated .vax file and performs steps 2-5.
2. Reads the Message Header and validates the declared message length against its actual length.
3. Reads the Message Type Header and validates the declared message record data length against
its actual length.
4. If the declared lengths are correct, for each record present in the current line:
a. Creates a record object based on:
i. Record Type
ii. Action Code
iii. Record identifier (consisting of the defined key set for the given record type and
their actual values)
b. While the parsing is being performed, the log files are created.
5. When all the lines have been parsed, the output files are created.
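The .vaxtoken gate in step 1 can be sketched as follows. This is an illustrative sketch, not the parser's actual implementation; folder and file naming follow the layout described in this document:

```python
# Sketch of step 1: a .vax file is processed only when its companion
# .vaxtoken file contains the string 'transfer completed'. Incomplete
# transfers are skipped, matching the behavior described above.
from pathlib import Path

def ready_vax_files(input_dir: str):
    """Yield .vax paths whose .vaxtoken marks the transfer as complete."""
    for token in Path(input_dir).glob("*.vaxtoken"):
        if "transfer completed" in token.read_text():
            vax = token.with_suffix(".vax")   # a.vaxtoken -> a.vax
            if vax.exists():
                yield vax
```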
Process Steps:

A1  PROCESS BEGIN. Verify the application folder. It must contain the following structure:
      Application Folder/
        input_files/
        processed_files/
        logs/
        output_files/
        keyFile.txt
        VAXFilesParser.jar
A2  Verify the keyFile.txt to be in proper format. (Refer to the Key File section of this document.)
A3  Place the .VAX and .VAXTOKEN files inside the input_files folder. (Refer to the VAX files section of this document.)
A4  Run the application VAXFilesParser.jar.
A5  Processed files are moved to the processed_files folder. Incomplete files are ignored.
A6  Verify the output_files folder. It should contain the newly created output files.
A7  PROCESS END. Verify log files for additional information on the parsing. (Refer to the Log files section of this document.)
6.3.1 The Key file
It specifies the keys needed to be present in the output file (per record type). Every line in the key file
consists of a pipe delimited array, which in turn consists of:
Record Type. (e.g. 112)
Informix column name. (e.g. liibrchno)
Initial position. (e.g. 34)
Length. (e.g. 10)
‘Record type’ is a number corresponding to the record type identifier, as described in the
LOCUS_ICC_FORMAT.xlsx document. Refer to the TX Type Map tab.
‘Informix column name’ is the database column that the given key is mapped to, as described in
the LOCUS_ICC_FORMAT.xlsx document. Refer to the LOCUS_IFX_Map tab.
‘Initial position’ is a number corresponding to the position of the key in the ‘record’ section of the
transaction, considering the first ‘record’ character as the 0-index char.
‘Length’ is the number of characters in the key.
Every line in the key file must be written following the format:
RecordType|InformixColumnName|InitialPosition|Length
If multiple keys must be defined for the same record type, those keys must be provided in separate lines,
each of which will contain the same record type.
For example:
321|hsno|0|10
321|hstarifftrtmnt|10|2
321|effdate|12|8
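Reading these pipe-delimited lines into a key map can be sketched as follows. This is an illustrative sketch of the documented format, not the parser's own code:

```python
# Sketch: parse key-file lines of the form
#   RecordType|InformixColumnName|InitialPosition|Length
# into a map of record type -> list of (column, start, length).
from collections import defaultdict

def load_key_map(lines):
    key_map = defaultdict(list)
    for line in lines:
        line = line.strip()
        if not line:
            continue                      # skip blank lines
        rec_type, column, start, length = line.split("|")
        key_map[rec_type].append((column, int(start), int(length)))
    return key_map

key_map = load_key_map([
    "321|hsno|0|10",
    "321|hstarifftrtmnt|10|2",
    "321|effdate|12|8",
])
print(key_map["321"][0])   # ('hsno', 0, 10)
```

Multiple lines with the same record type simply accumulate into that type's key list, matching the rule above.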
6.3.2 The output files
One output file will be generated for every input file in the input_files folder. The output file will be named
after the input file it was created from:
VAXParser_[InputFileName].txt
The output files consist of comma-separated records composed of the record metadata elements,
following the format:
[Informix Table],[Informix Column 1=Value],[Informix Column 2=Value],...,[Informix Column n=Value]
For example:
B3,liibrchno=310,liirefno=869376
HS_DUTY_RATE,hsno=7308300022,hstarifftrtmnt=02,effdate=20150101
CLAIM_LOG,b3acctsecurno=10827,b3transno=654858467,b3transseqno=00
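A downstream consumer could read one of these lines back into a table name and key/value pairs as follows. This is a hedged sketch of the documented line format, not part of the parser itself:

```python
# Sketch: split an output line of the form
#   Table,col1=value1,col2=value2,...
# into the table name and a dict of key columns to values.

def parse_output_line(line: str):
    table, *pairs = line.strip().split(",")
    keys = dict(p.strip().split("=", 1) for p in pairs)
    return table, keys

table, keys = parse_output_line("B3,liibrchno=310,liirefno=869376")
print(table)              # 'B3'
print(keys["liirefno"])   # '869376'
```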
6.3.3 The log files
A log file will be generated for every input file in the input_files folder. The log file will be named after the
.vax file it was created from:
VAXParser_[InputFileName].log
The first section of the log file will display the specified key map for reference.
The second section consists of the line parsing log. For each line in the document, the logging process
will perform the following actions:
1. Check if the declared lengths match the actual line lengths. If they do, print a confirmation
message.
2. If the record type is supported by the specified key map, for each transaction in the line:
a. Print the record type.
b. Print the record length.
c. Print the action code.
d. Print the transaction identifier.
3. If the record type is not supported by the specified key map, print an error message. ‘Record type
XXX is not supported by the current Key Map.’ This error will also be printed to the logs/error.log
file.
Users should always check the logs/error.log file after a parsing run to verify that all records in the
files were supported by the provided key map. Records that were not supported are not included in the
output files. The key map file can be modified to include the missing record types, after which the
program can be run again.
Standard error and warning codes

Code   Description
E01    No key file found under the root directory.
E02    The key file is malformed.
E03    Only .vax files are allowed in the input_files folder.
E04    Line is invalid. Declared message lengths do not match the actual length.
W01    A record is not supported by the provided key map.
W02    No files to process under the input_files folder.
W03    File is potentially incomplete. Ignoring file for this execution.
W04    Vax file could not be moved to the processed_files folder.
6.3.4 Not covered by this document
The following process in the business flow should contain functionality to:
1. Automatically copy funnel files into the input_files folder as they get generated.
2. Create a program that runs the VAX parser automatically every X minutes.
3. Automatically copy parsed files to a fixed destination after the process is complete.
4. Take the parsed files and load data into the Amazon RDS database.
Issues & Action Items
Specify any outstanding issues/questions relating to the document, the person the issue/question is
assigned to, due date, and associated resolution(s).
ID   Question / Issue   Owner/Resolver   Due Date   Priority   Resolution/Progress
1.
2.
3.
4.
5.
Glossary
Term   Definition
Appendix
Example
Example files of a complete process flow are attached below.

Input file (.vax):        32063015001603.VAX
Status file (.vaxtoken):  32063015001603.vaxtoken
Key file:                 keyFile.txt
Output file:              VAXParser_32063015001603.txt
Log file:                 VAXParser_32063015001603.log
Run xwin on the laptop as root:
1. Start the Xming server on the laptop.
2. Connect to AIX via ssh with X authority forwarding enabled.
3. Follow the steps below.
Start Nagios
nagios@ifx01:/usr/local/nagios># cat start_nrpe.ksh
#!/usr/bin/ksh
nohup /usr/local/nagios/bin/nrpe -c /usr/local/nagios/etc/nrpe.cfg -n -d >/dev/null 2>&1
nagios@ifx01:/usr/local/nagios># ./start_nrpe.ksh
nagios@ifx01:/usr/local/nagios># ps -ef|grep nagios
nagios 14287080 1 0 09:58:48 - 0:00 /usr/local/nagios/bin/nrpe -c /usr/local/nagios/etc/nrpe.cfg -n -d
Configure network tuning parameters
Use the no command to configure network tuning parameters. The no command sets or displays current or next-boot
values for network tuning parameters. It can also make changes permanent or defer them until the next reboot.
Whether the command sets or displays a parameter is determined by the accompanying flag.
The -o flag performs both actions: it can either display the value of a parameter or set a new value for a parameter.
When the no command is used to modify a network option, it logs a message to the syslog using the LOG_KERN
facility. For more information on how the network parameters interact with each other, refer to the Networks and
communication management documentation.
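As a sketch of typical usage (these are AIX-only commands, shown here for illustration; parameter names are taken from the listing below and the values are examples, not recommendations):

```shell
# Display the current value of a single tunable
no -o tcp_recvspace

# Set a value for the running system only
no -o tcp_recvspace=65536

# Set a value and make it permanent across reboots
no -p -o tcp_recvspace=65536

# Defer a reboot-required (type R) tunable to the next boot
no -r -o tcp_inpcb_hashtab_siz=24499
```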
[lchen@ifx01 /home/lchen] $ no -a
arpqsize = 1024
arpt_killc = 20
arptab_bsiz = 7
arptab_nb = 149
bcastping = 0
bsd_loglevel = 3
clean_partial_conns = 0
delayack = 0
delayackports = {}
dgd_flush_cached_route = 0
dgd_packets_lost = 3
dgd_ping_time = 5
dgd_retry_time = 5
directed_broadcast = 0
fasttimo = 200
hstcp = 0
icmp6_errmsg_rate = 10
icmpaddressmask = 0
ie5_old_multicast_mapping = 0
ifsize = 256
igmpv2_deliver = 0
init_high_wat = 0
ip6_defttl = 64
ip6_prune = 1
ip6forwarding = 0
ip6srcrouteforward = 1
ip_ifdelete_notify = 0
ip_nfrag = 200
ipforwarding = 0
ipfragttl = 2
ipignoreredirects = 0
ipqmaxlen = 100
ipsendredirects = 1
ipsrcrouteforward = 1
ipsrcrouterecv = 0
ipsrcroutesend = 1
limited_ss = 0
llsleep_timeout = 3
lo_perf = 1
lowthresh = 90
main_if6 = 0
main_site6 = 0
maxnip6q = 20
maxttl = 255
medthresh = 95
mpr_policy = 1
multi_homed = 1
nbc_limit = 524288
nbc_max_cache = 131072
nbc_min_cache = 1
nbc_ofile_hashsz = 12841
nbc_pseg = 0
nbc_pseg_limit = 1048576
ndd_event_name = {all}
ndd_event_tracing = 0
ndogthreads = 0
ndp_mmaxtries = 3
ndp_umaxtries = 3
ndpqsize = 50
ndpt_down = 3
ndpt_keep = 120
ndpt_probe = 5
ndpt_reachable = 30
ndpt_retrans = 1
net_buf_size = {all}
net_buf_type = {all}
net_malloc_frag_mask = {0}
netm_page_promote = 1
nonlocsrcroute = 0
nstrpush = 8
passive_dgd = 0
pmtu_default_age = 10
pmtu_expire = 10
pmtu_rediscover_interval = 30
psebufcalls = 20
psecache = 1
psetimers = 20
rfc1122addrchk = 0
rfc1323 = 0
rfc2414 = 1
route_expire = 1
routerevalidate = 0
rtentry_lock_complex = 0
rto_high = 64
rto_length = 13
rto_limit = 7
rto_low = 1
sack = 0
sb_max = 1048576
send_file_duration = 300
site6_index = 0
sockthresh = 85
sodebug = 0
sodebug_env = 0
somaxconn = 1024
strctlsz = 1024
strmsgsz = 0
strthresh = 85
strturncnt = 15
subnetsarelocal = 1
tcp_bad_port_limit = 0
tcp_cwnd_modified = 0
tcp_ecn = 0
tcp_ephemeral_high = 65535
tcp_ephemeral_low = 32768
tcp_fastlo = 0
tcp_fastlo_crosswpar = 0
tcp_finwait2 = 1200
tcp_icmpsecure = 0
tcp_init_window = 0
tcp_inpcb_hashtab_siz = 24499
tcp_keepcnt = 8
tcp_keepidle = 3600 #(ipdev)tcp_keepidle = 14400
tcp_keepinit = 150
tcp_keepintvl = 150
tcp_limited_transmit = 1
tcp_low_rto = 0
tcp_maxburst = 0
tcp_mssdflt = 1460
tcp_nagle_limit = 65535
tcp_nagleoverride = 0
tcp_ndebug = 100
tcp_newreno = 1
tcp_nodelayack = 1
tcp_pmtu_discover = 1
tcp_recvspace = 16384
tcp_sendspace = 16384
tcp_tcpsecure = 0
tcp_timewait = 1
tcp_ttl = 60
tcprexmtthresh = 3
tcptr_enable = 0
thewall = 2097152
timer_wheel_tick = 0
tn_filter = 1
udp_bad_port_limit = 0
udp_ephemeral_high = 65535
udp_ephemeral_low = 32768
udp_inpcb_hashtab_siz = 24499
udp_pmtu_discover = 1
udp_recvspace = 42080
udp_sendspace = 9216
udp_ttl = 30
udpcksum = 1
use_sndbufpool = 1
root@ipdev:/># no -L
General Network Parameters
--------------------------------------------------------------------------------
NAME CUR DEF BOOT MIN MAX UNIT TYPE
DEPENDENCIES
--------------------------------------------------------------------------------
bsd_loglevel 3 3 3 0 7 numeric D
--------------------------------------------------------------------------------
fasttimo 200 200 200 50 200 millisecond D
--------------------------------------------------------------------------------
init_high_wat 0 0 0 0 10 %_of_thewall D
--------------------------------------------------------------------------------
nbc_limit 512K 512K 512K 0 8E-1 kbyte D
thewall
--------------------------------------------------------------------------------
nbc_max_cache 128K 128K 128K 1 512M byte D
nbc_min_cache
nbc_limit
--------------------------------------------------------------------------------
nbc_min_cache 1 1 1 1 128K byte D
nbc_max_cache
--------------------------------------------------------------------------------
nbc_ofile_hashsz 12841 12841 12841 1 999999 segment D
--------------------------------------------------------------------------------
nbc_pseg 0 0 0 0 2G-1 segment D
--------------------------------------------------------------------------------
nbc_pseg_limit 1M 1M 1M 0 2M kbyte D
--------------------------------------------------------------------------------
ndd_event_name {all} {all} {all} 0 128 string D
--------------------------------------------------------------------------------
ndd_event_tracing 0 0 0 0 64K-1 numeric D
--------------------------------------------------------------------------------
net_buf_size {all} {all} {all} 0 128 string D
--------------------------------------------------------------------------------
net_buf_type {all} {all} {all} 0 128 string D
--------------------------------------------------------------------------------
net_malloc_frag_mask {0} {0} {0} 0 128 string D
--------------------------------------------------------------------------------
netm_page_promote 1 1 1 0 1 numeric D
--------------------------------------------------------------------------------
sb_max 1M 1M 1M 4K 8E-1 byte D
--------------------------------------------------------------------------------
send_file_duration 300 300 300 0 4G-1 second D
--------------------------------------------------------------------------------
sockthresh 85 85 85 0 100 %_of_thewall D
--------------------------------------------------------------------------------
sodebug 0 0 0 0 1 boolean C
--------------------------------------------------------------------------------
sodebug_env 0 0 0 0 1 boolean C
--------------------------------------------------------------------------------
somaxconn 1K 1K 1K 0 32K-1 numeric C
--------------------------------------------------------------------------------
tcp_inpcb_hashtab_siz 24499 24499 24499 1 999999 numeric R
--------------------------------------------------------------------------------
tcptr_enable 0 0 0 0 1 boolean C
--------------------------------------------------------------------------------
thewall 2M 2M 2M 0 64M kbyte S
--------------------------------------------------------------------------------
udp_inpcb_hashtab_siz 24499 24499 24499 1 83000 numeric R
--------------------------------------------------------------------------------
use_sndbufpool 1 1 1 0 1 boolean R
--------------------------------------------------------------------------------
TCP Network Tunable Parameters
--------------------------------------------------------------------------------
NAME CUR DEF BOOT MIN MAX UNIT TYPE
DEPENDENCIES
--------------------------------------------------------------------------------
clean_partial_conns 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
delayack 0 0 0 0 3 boolean D
--------------------------------------------------------------------------------
delayackports {} {} {} 0 10 ports_list D
--------------------------------------------------------------------------------
hstcp 0 0 0 0 1 boolean D
<LOB> - <Project Name>
db9d4f33214f80b60cec0ca3e590df64
Printed 3/17/2021
db9d4f33214f80b60cec0ca3e590df64
Page 336 of 362
© Livingston International 2014 - All Rights Reserved CONFIDENTIAL AND PROPRIETARY TO Livingston International
For Internal Use Only - Do Not Duplicate
--------------------------------------------------------------------------------
limited_ss 0 0 0 0 100 numeric D
--------------------------------------------------------------------------------
rfc1323 0 0 0 0 1 boolean C
--------------------------------------------------------------------------------
rfc2414 1 1 1 0 1 boolean C
--------------------------------------------------------------------------------
rto_high 64 64 64 2 8E-1 roundtriptime R
rto_low
--------------------------------------------------------------------------------
rto_length 13 13 13 1 64 roundtriptime R
--------------------------------------------------------------------------------
rto_limit 7 7 7 1 64 roundtriptime R
rto_high
rto_low
--------------------------------------------------------------------------------
rto_low 1 1 1 1 63 roundtriptime R
rto_high
--------------------------------------------------------------------------------
sack 0 0 0 0 1 boolean C
--------------------------------------------------------------------------------
tcp_bad_port_limit 0 0 0 0 8E-1 numeric D
--------------------------------------------------------------------------------
tcp_cwnd_modified 0 0 0 0 1 boolean C
--------------------------------------------------------------------------------
tcp_ecn 0 0 0 0 1 boolean C
--------------------------------------------------------------------------------
tcp_ephemeral_high 64K-1 64K-1 64K-1 32K+1 64K-1 numeric D
tcp_ephemeral_low
--------------------------------------------------------------------------------
tcp_ephemeral_low 32K 32K 32K 1K 65534 numeric D
tcp_ephemeral_high
--------------------------------------------------------------------------------
tcp_fastlo 0 0 0 0 1 boolean C
--------------------------------------------------------------------------------
tcp_fastlo_crosswpar 0 0 0 0 1 boolean C
--------------------------------------------------------------------------------
tcp_finwait2 1200 1200 1200 0 32K-1 halfsecond D
--------------------------------------------------------------------------------
tcp_icmpsecure 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
tcp_init_window 0 0 0 0 32K-1 byte C
--------------------------------------------------------------------------------
tcp_keepcnt 8 8 8 0 32K-1 numeric D
--------------------------------------------------------------------------------
tcp_keepidle 14400 14400 14400 1 32K-1 halfsecond C
--------------------------------------------------------------------------------
tcp_keepinit 150 150 150 1 32K-1 halfsecond D
--------------------------------------------------------------------------------
tcp_keepintvl 150 150 150 1 32K-1 halfsecond C
--------------------------------------------------------------------------------
tcp_limited_transmit 1 1 1 0 1 boolean D
--------------------------------------------------------------------------------
tcp_low_rto 0 0 0 0 3000 numeric D
timer_wheel_tick
--------------------------------------------------------------------------------
tcp_maxburst 0 0 0 0 32K-1 numeric D
--------------------------------------------------------------------------------
tcp_mssdflt 1460 1460 1460 1 64K-1 byte C
--------------------------------------------------------------------------------
tcp_nagle_limit 64K-1 64K-1 64K-1 0 64K-1 byte D
--------------------------------------------------------------------------------
tcp_nagleoverride 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
tcp_ndebug 100 100 100 0 32K-1 numeric D
--------------------------------------------------------------------------------
tcp_newreno 1 1 1 0 1 boolean D
--------------------------------------------------------------------------------
tcp_nodelayack 1 0 0 0 1 boolean D
--------------------------------------------------------------------------------
tcp_recvspace 16K 16K 16K 4K 8E-1 byte C
sb_max
--------------------------------------------------------------------------------
tcp_sendspace 16K 16K 16K 4K 8E-1 byte C
sb_max
--------------------------------------------------------------------------------
tcp_tcpsecure 0 0 0 0 7 numeric D
--------------------------------------------------------------------------------
tcp_timewait 1 1 1 1 5 15_second D
--------------------------------------------------------------------------------
tcp_ttl 60 60 60 1 255 0.6_second C
--------------------------------------------------------------------------------
tcprexmtthresh 3 3 3 1 32K-1 numeric D
--------------------------------------------------------------------------------
timer_wheel_tick 0 0 0 0 100 numeric R
--------------------------------------------------------------------------------
UDP Network Tunable Parameters
--------------------------------------------------------------------------------
NAME CUR DEF BOOT MIN MAX UNIT TYPE
DEPENDENCIES
--------------------------------------------------------------------------------
udp_bad_port_limit 0 0 0 0 8E-1 numeric D
--------------------------------------------------------------------------------
udp_ephemeral_high 64K-1 64K-1 64K-1 32K+1 64K-1 numeric D
udp_ephemeral_low
--------------------------------------------------------------------------------
udp_ephemeral_low 32K 32K 32K 1K 65534 numeric D
udp_ephemeral_high
--------------------------------------------------------------------------------
udp_recvspace 42080 42080 42080 4K 8E-1 byte C
sb_max
--------------------------------------------------------------------------------
udp_sendspace 9K 9K 9K 4K 8E-1 byte C
sb_max
--------------------------------------------------------------------------------
udp_ttl 30 30 30 1 255 second C
--------------------------------------------------------------------------------
udpcksum 1 1 1 0 1 boolean D
--------------------------------------------------------------------------------
IP Network Tunable Parameters
--------------------------------------------------------------------------------
NAME CUR DEF BOOT MIN MAX UNIT TYPE
DEPENDENCIES
--------------------------------------------------------------------------------
directed_broadcast 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
ie5_old_multicast_mapping 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
ip6_defttl 64 64 64 1 255 numeric D
--------------------------------------------------------------------------------
ip6_prune 1 1 1 1 8E-1 second D
--------------------------------------------------------------------------------
ip6forwarding 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
ip6srcrouteforward 1 1 1 0 1 boolean D
--------------------------------------------------------------------------------
ip_ifdelete_notify 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
ip_nfrag 200 200 200 1 32K-1 byte D
--------------------------------------------------------------------------------
ipforwarding 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
ipfragttl 2 2 2 1 255 halfsecond D
--------------------------------------------------------------------------------
ipignoreredirects 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
ipqmaxlen 100 100 100 100 2G-1 numeric R
--------------------------------------------------------------------------------
ipsendredirects 1 1 1 0 1 boolean D
--------------------------------------------------------------------------------
ipsrcrouteforward 1 1 1 0 1 boolean D
--------------------------------------------------------------------------------
ipsrcrouterecv 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
ipsrcroutesend 1 1 1 0 1 boolean D
--------------------------------------------------------------------------------
lo_perf 1 1 1 0 1 boolean R
--------------------------------------------------------------------------------
maxnip6q 20 20 20 1 32K-1 numeric D
--------------------------------------------------------------------------------
multi_homed 1 1 1 0 3 boolean D
--------------------------------------------------------------------------------
ndogthreads 0 0 0 0 1K numeric D
--------------------------------------------------------------------------------
nonlocsrcroute 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
subnetsarelocal 1 1 1 0 1 boolean D
--------------------------------------------------------------------------------
tn_filter 1 1 1 0 1 boolean D
--------------------------------------------------------------------------------
ARP/NDP Network Tunable Parameters
--------------------------------------------------------------------------------
NAME CUR DEF BOOT MIN MAX UNIT TYPE
DEPENDENCIES
--------------------------------------------------------------------------------
arpqsize 1K 1K 1K 1 32K-1 numeric D
tcp_pmtu_discover
udp_pmtu_discover
--------------------------------------------------------------------------------
arpt_killc 20 20 20 0 255 minute D
--------------------------------------------------------------------------------
arptab_bsiz 7 7 7 1 32K-1 bucket_size R
--------------------------------------------------------------------------------
arptab_nb 149 149 149 1 32K-1 buckets R
--------------------------------------------------------------------------------
dgd_packets_lost 3 3 3 1 32K-1 numeric D
--------------------------------------------------------------------------------
dgd_ping_time 5 5 5 1 8E-1 second D
--------------------------------------------------------------------------------
dgd_retry_time 5 5 5 1 32K-1 numeric D
--------------------------------------------------------------------------------
ndp_mmaxtries 3 3 3 0 8E-1 numeric D
--------------------------------------------------------------------------------
ndp_umaxtries 3 3 3 0 8E-1 numeric D
--------------------------------------------------------------------------------
ndpqsize 50 50 50 1 32K-1 numeric D
--------------------------------------------------------------------------------
ndpt_down 3 3 3 1 8E-1 halfsecond D
--------------------------------------------------------------------------------
ndpt_keep 120 120 120 1 8E-1 halfsecond D
--------------------------------------------------------------------------------
ndpt_probe 5 5 5 1 4G-1 halfsecond D
--------------------------------------------------------------------------------
ndpt_reachable 30 30 30 1 4G-1 halfsecond D
--------------------------------------------------------------------------------
ndpt_retrans 1 1 1 1 4G-1 halfsecond D
--------------------------------------------------------------------------------
passive_dgd 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
rfc1122addrchk 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
Stream Header Tunable Parameters
--------------------------------------------------------------------------------
NAME CUR DEF BOOT MIN MAX UNIT TYPE
DEPENDENCIES
--------------------------------------------------------------------------------
lowthresh 90 90 90 0 100 %_of_thewall D
--------------------------------------------------------------------------------
medthresh 95 95 95 0 100 %_of_thewall D
--------------------------------------------------------------------------------
nstrpush 8 8 8 8 32K-1 numeric S
--------------------------------------------------------------------------------
psebufcalls 20 20 20 20 8E-1 numeric I
--------------------------------------------------------------------------------
psecache 1 1 1 0 1 boolean D
--------------------------------------------------------------------------------
psetimers 20 20 20 20 8E-1 numeric I
--------------------------------------------------------------------------------
strctlsz 1K 1K 1K 0 32K-1 byte D
--------------------------------------------------------------------------------
strmsgsz 0 0 0 0 32K-1 byte D
--------------------------------------------------------------------------------
strthresh 85 85 85 0 100 %_of_thewall D
--------------------------------------------------------------------------------
strturncnt 15 15 15 1 8E-1 numeric D
--------------------------------------------------------------------------------
Other Network Tunable Parameters
--------------------------------------------------------------------------------
NAME CUR DEF BOOT MIN MAX UNIT TYPE
DEPENDENCIES
--------------------------------------------------------------------------------
bcastping 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
dgd_flush_cached_route 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
icmp6_errmsg_rate 10 10 10 1 255 msg/second D
--------------------------------------------------------------------------------
icmpaddressmask 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
ifsize 256 256 256 8 1K numeric R
--------------------------------------------------------------------------------
igmpv2_deliver 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
llsleep_timeout 3 3 3 1 2G-1 second D
--------------------------------------------------------------------------------
main_if6 0 0 0 0 32K-1 numeric D
--------------------------------------------------------------------------------
main_site6 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
maxttl 255 255 255 1 255 second D
--------------------------------------------------------------------------------
mpr_policy 1 1 1 1 6 numeric D
--------------------------------------------------------------------------------
pmtu_default_age 10 10 10 0 32K-1 minute D
--------------------------------------------------------------------------------
pmtu_expire 10 10 10 0 32K-1 minute D
--------------------------------------------------------------------------------
pmtu_rediscover_interval 30 30 30 0 32K-1 minute D
--------------------------------------------------------------------------------
route_expire 1 1 1 0 1 boolean D
--------------------------------------------------------------------------------
routerevalidate 0 0 0 0 1 boolean D
--------------------------------------------------------------------------------
rtentry_lock_complex 0 0 0 0 1 boolean R
--------------------------------------------------------------------------------
site6_index 0 0 0 0 32K-1 numeric D
--------------------------------------------------------------------------------
tcp_pmtu_discover 1 1 1 0 1 boolean D
--------------------------------------------------------------------------------
udp_pmtu_discover 1 1 1 0 1 boolean D
--------------------------------------------------------------------------------
n/a means parameter not supported by the current platform or kernel
Parameter types:
S = Static: cannot be changed
D = Dynamic: can be freely changed
B = Bosboot: can only be changed using bosboot and reboot
R = Reboot: can only be changed during reboot
C = Connect: changes are only effective for future socket connections
M = Mount: changes are only effective for future mountings
I = Incremental: can only be incremented
Value conventions:
K = Kilo: 2^10 G = Giga: 2^30 P = Peta: 2^50
M = Mega: 2^20 T = Tera: 2^40 E = Exa: 2^60
# no -o tcp_keepidle=3600
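When scripting against `no -a` or `no -L` output, the size notation in the legend above (K = 2^10 through E = 2^60, with trailing adjustments such as `64K-1`) can be expanded mechanically. A minimal Python sketch; the parsing rules are inferred from the legend above, not from any AIX-provided API:

```python
import re

# Multipliers per the value conventions legend:
# K = 2^10, M = 2^20, G = 2^30, T = 2^40, P = 2^50, E = 2^60.
SUFFIX = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40, "P": 2**50, "E": 2**60}

def expand(value: str) -> int:
    """Expand a `no -L` value like '1M', '64K-1', or '32K+1' to an integer."""
    m = re.fullmatch(r"(\d+)([KMGTPE])?([+-]\d+)?", value)
    if not m:
        raise ValueError(f"unrecognized value: {value}")
    base, suffix, adjust = m.groups()
    return int(base) * SUFFIX.get(suffix, 1) + int(adjust or 0)

# Examples taken from the listing above:
print(expand("1M"))     # thewall/sb_max value in the table: 1048576
print(expand("64K-1"))  # tcp_ephemeral_high maximum: 65535
print(expand("8E-1"))   # sb_max maximum: 2^63 - 1
```

This is handy when comparing a tunable's current value against its documented maximum before issuing a `no -o` change such as the `tcp_keepidle` example above.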
Locus_To_CDH_File_Mapping
--------------------------------------------------------------------------------
LOCUS RECORD TYPE | TABLE NAME (CCIH or CCID) | INFORMIX FIELD NAME | LOCUS OFFSET | INFORMIX FIELD LENGTH | LOCUS FIELD TYPE | FIELD FORMAT (COBOL PICTURE CLAUSE) | KEY TYPE (internal to "C" code, Tcl.c)
--------------------------------------------------------------------------------
ACCO | CLIENT_INVOICE | LIIClientNo | 3 | 6 | K | 9(6) | LK
ACCO | CLIENT_INVOICE | LIIAccountNo | 9 | 3 | K | 9(3) | LK
ACCO | CLIENT_INVOICE | LIIBrchNo | 20 | 3 | K | 9(3) | LK
ACCO | CLIENT_INVOICE | LIIRefNo | 23 | 6 | K | 9(6) | LK
ACCO | CLIENT_INVOICE | LIIRefText | 29 | 3 | K | X(3) | LK
ACCO | CLIENT_INVOICE | ItemStatus | 504 | 3 | N | X(1) | D
ACCO | CLIENT_INVOICE | ItemTypeCode | 29 | 2 | A | X(2) | D
ACCO | CLIENT_INVOICE | ItemDate | 12 | 8 | D | 9(8) | D
ACCO | CLIENT_INVOICE | TotDuty | 377 | 13 | C | 9(9).9(2)- | D
ACCO | CLIENT_INVOICE | TotAmt | 92 | 13 | C | 9(9).9(2)- | D
ACCO | CLIENT_INVOICE | Balance | 105 | 13 | C | 9(9).9(2)- | D
ACCO | CLIENT_INVOICE | TransactionNo | 67 | 15 | A | X(15) | D
ACCO | CLIENT_INVOICE | ItemFormCode | 501 | 3 | A | X(3) | D
ACCO | CLIENT_INVOICE | RecordLength | 0 | 534 | RL | NA | NA
ACCO | CLIENT_INVOICE | LocusKey | 0 | 0 | LK | NA | NA
ACCO | CLIENT_INVOICE | DebugCounters | 0 | 0 | DC | NA | NA
ACUS | LII_CLIENT | LIIClientNo | 3 | 6 | K | 9(6) | LK
ACUS | LII_CLIENT | LastPaymntDate | 12 | 8 | D | 9(8) | D
ACUS | LII_CLIENT | LastChequeAmt | 20 | 13 | C | 9(9).9(2)- | D
ACUS | LII_CLIENT | Terms | 41 | 2 | N | 9(2) | D
ACUS | LII_CLIENT | RecordLength | 0 | 42 | RL | NA | NA
ACUS | LII_CLIENT | LocusKey | 0 | 0 | LK | NA | NA
ACUS | LII_CLIENT | DebugCounters | 0 | 0 | DC | NA | NA
B2CL | CLAIM_LOG | ClaimLogIID | 0 | 0 | N | NA | SQL
B2CL | CLAIM_LOG | LIIClientNo | 3 | 6 | K | 9(6) | K
B2CL | CLAIM_LOG | B3TransNo | 17 | 9 | K | X(9) | K
B2CL | CLAIM_LOG | B3AcctSecurNo | 12 | 5 | K | X(5) | K
B2CL | CLAIM_LOG | B3TransSeqNo | 26 | 2 | N | 9(2) | K
B2CL | CLAIM_LOG | ClaimRefNo | 28 | 14 | A | X(14) | D
B2CL | CLAIM_LOG | B2BrchNo | 120 | 3 | N | 9(3) | D
B2CL | CLAIM_LOG | B2RefNo | 123 | 6 | N | 9(6) | D
B2CL | CLAIM_LOG | ClaimAmount | 104 | 13 | C | 9(9).9(2)- | D
B2CL | CLAIM_LOG | ClaimStatus | 117 | 3 | N | X(3) | D
B2CL | CLAIM_LOG | ClaimCode | 67 | 2 | A | X(2) | D
B2CL | CLAIM_LOG | CustomsDesn | 66 | 1 | A | X(1) | D
B2CL | CLAIM_LOG | ReceivedDate | 129 | 8 | D | 9(8) | D
B2CL | CLAIM_LOG | Submitdate | 42 | 8 | D | 9(8) | D
B2CL | CLAIM_LOG | StampedCopydate | 50 | 8 | D | 9(8) | D
B2CL | CLAIM_LOG | CustomsDesnDate | 58 | 8 | D | 9(8) | D
B2CL | CLAIM_LOG | ClaimVendorName | 69 | 35 | A | X(35) | D
B2CL | CLAIM_LOG | RecordLength | 0 | 136 | RL | NA | NA
B2CL | CLAIM_LOG | LocusKey | 0 | 0 | LK | NA | NA
B2CL | CLAIM_LOG | DebugCounters | 0 | 0 | DC | NA | NA
B2DA | AS_ACCOUNTED | AsAcctIID | 0 | 0 | N | NA | SQL
B2DA | AS_ACCOUNTED | ClaimLogIID | 0 | 0 | N | NA | LOO
B2DA | AS_ACCOUNTED | B2SubHdrNo | 13 | 2 | N | 9(2) | K
B2DA | AS_ACCOUNTED | B3LineNo | 16 | 3 | N | 9(3) | K
B2DA | AS_ACCOUNTED | B2LineNo | 19 | 3 | N | 9(3) | K
B2DA | AS_ACCOUNTED | HSNo | 86 | 10 | A | 9(10) | D
B2DA | AS_ACCOUNTED | B3Description | 33 | 53 | A | X(53) | D
B2DA | AS_ACCOUNTED | B2BrchNo | 4 | 3 | N | 9(3) | SK
B2DA | AS_ACCOUNTED | B2RefNo | 7 | 6 | N | 9(6) | SK
B2DA | AS_ACCOUNTED | RecordLength | 0 | 96 | RL | NA | NA
B2DA | AS_ACCOUNTED | LocusKey | 0 | 0 | LK | NA | NA
B2DA | AS_ACCOUNTED | DebugCounters | 0 | 0 | DC | NA | NA
B2DC | AS_CLAIMED | AsClaimedIID | 0 | 0 | N | NA | SQL
B2DC | AS_CLAIMED | ClaimLogIID | 0 | 0 | N | NA | LOO
B2DC | AS_CLAIMED | B2SubHdrNo | 13 | 2 | N | 9(2) | K
B2DC | AS_CLAIMED | B3LineNo | 16 | 3 | N | 9(3) | K
B2DC | AS_CLAIMED | B2LineNo | 19 | 3 | N | 9(3) | K
B2DC | AS_CLAIMED | HSNo | 86 | 10 | A | 9(10) | D
B2DC | AS_CLAIMED | B3Description | 33 | 53 | A | X(53) | D
B2DC | AS_CLAIMED | B2BrchNo | 4 | 3 | N | 9(3) | SK
B2DC | AS_CLAIMED | B2RefNo | 7 | 6 | N | 9(3) | SK
B2DC | AS_CLAIMED | RecordLength | 0 | 96 | RL | NA | NA
B2DC | AS_CLAIMED | LocusKey | 0 | 0 | LK | NA | NA
B2DC | AS_CLAIMED | DebugCounters | 0 | 0 | DC | NA | NA
B3BD | B3B | B3BIID | 0 | 0 | N | NA | SQL
B3BD | B3B | B3IID | 0 | 0 | N | NA | LOO
B3BD | B3B | CCDSeqNo | 12 | 3 | N | 9(3) | D
B3BD | B3B | CargCntrlNo | 15 | 25 | A | X(25) | D
B3BD | B3B | Quantity | 40 | 8 | N | 9(8) | D
B3BD | B3B | LIIBrchNo | 3 | 3 | N | 9(3) | SK
B3BD | B3B | LIIRefNo | 6 | 6 | N | 9(3) | SK
B3BD | B3B | RecordLength | 0 | 61 | RL | NA | NA
B3BD | B3B | LocusKey | 0 | 0 | LK | NA | NA
B3BD | B3B | DebugCounters | 0 | 0 | DC | NA | NA
B3BZ | B3B | B3Iid | 0 | 0 | N | NA | LOO
B3BZ | B3B | LiiBrchNo | 3 | 3 | N | NA | SK
B3BZ | B3B | LiiRefNo | 6 | 6 | N | NA | SK
B3BZ | B3B | RecordLength | 0 | 61 | RL | NA | NA
B3BZ | B3B | LocusKey | 0 | 0 | LK | NA | NA
B3BZ | B3B | DebugCounters | 0 | 0 | DC | NA | NA
B3EH | B3 | B3IID | 0 | 0 | N | NA | SQL
B3EH | B3 | LIIBrchNo | 4 | 3 | N | 9(3) | SK
B3EH | B3 | LIIRefNo | 7 | 6 | N | 9(6) | SK
B3EH | B3 | LIIClientNo | 51 | 6 | N | 9(6) | D
B3EH | B3 | LIIAccountNo | 57 | 3 | N | 9(3) | D
B3EH | B3 | AcctSecuNo | 37 | 5 | N | X(5) | D
B3EH | B3 | B3Type | 23 | 2 | A | X(2) | D
B3EH | B3 | CargoCntrlNo | 101 | 25 | A | X(25) | D
B3EH | B3 | CarrierCode | 1142 | 4 | A | X(4) | D
B3EH | B3 | CreateDate | 25 | 12 | DT4 | 9(12) | D
B3EH | B3 | CustOffc | 134 | 3 | LZ4/NZ | 9(3) | D
B3EH | B3 | K84Date | 867 | 8 | D | 9(8) | D
B3EH | B3 | ModeTransp | 1129 | 1 | NZ | 9(1) | D
B3EH | B3 | PortUnlading | 1130 | 4 | NZ | 9(4) | D
B3EH | B3 | RelDate | 1070 | 12 | DT4 | 9(12) | D
B3EH | B3 | Status | 925 | 3 | N | 9(3) | D
B3EH | B3 | TotB3Duty | 485 | 13 | C | 9(9).9(2)- | D
B3EH | B3 | TotB3ExcTax | 511 | 13 | C | 9(9).9(2)- | D
B3EH | B3 | TotB3GST | 498 | 13 | C | 9(9).9(2)- | D
B3EH | B3 | TotB3SIMA | 1042 | 15 | C | 9(11).9(2)- | D
B3EH | B3 | TotB3VFD | 549 | 15 | C | 9(11).9(2)- | D
B3EH | B3 | TransNo | 42 | 9 | N | X(9) | D
B3EH | B3 | Weight | 1112 | 7 | N | 9(7) | D
B3EH | B3 | PurchaseOrder1 | 211 | 15 | A | X(15) | D
B3EH | B3 | PurchaseOrder2 | 226 | 15 | A | X(15) | D
B3EH | B3 | ShipVia | 163 | 18 | A | X(18) | D
B3EH | B3 | LocationOfGoods | 146 | 17 | A | X(17) | D
B3EH | B3 | VendorName | 76 | 25 | A | X(25) | D
B3EH | B3 | VendorState | 1099 | 3 | A | X(3) | D
B3EH | B3 | VendorZip | 1102 | 5 | NZ | 9(5) | D
B3EH | B3 | Freight | 1134 | 8 | C | 9(8) | D
B3EH | B3 | USPortExit | 1107 | 4 | A | X(4) | D
B3EH | B3 | BillOfLading | 181 | 10 | A | X(10) | D
B3EH | B3 | CargCntrlQty | 1091 | 8 | N | 9(8) | D
B3EH | B3 | ApprovedDate | 1307 | 8 | D | 9(8) | D
B3EH | B3 | ContainerNo | 191 | 20 | A | X(20) | D
B3EH | B3 | SBRNNo | 1292 | 15 | A | X(15) | D
B3EH | B3 | CCNQty | 1091 | 8 | N | 9(8) | D
B3EH | B3 | CCINumLines | 575 | 5 | N | 9(5) | D
B3EH | B3 | InvoiceQty | 126 | 8 | N | 9(8) | D
B3EH | B3 | WarehouseNum | 1038 | 3 | N | 9(3) | D
B3EH | B3 | EntName | 306 | 35 | A | X(35) | D
B3EH | B3 | EntAddr1 | 341 | 35 | A | X(35) | D
B3EH | B3 | EntAddr2 | 376 | 35 | A | X(35) | D
B3EH | B3 | EntAddr3 | 411 | 35 | A | X(35) | D
B3EH | B3 | EntAddr4 | 446 | 30 | A | X(30) | D
B3EH | B3 | EntPostCd | 476 | 9 | A | X(9) | D
B3EH | B3 | StatusDate | 928 | 12 | DT4 | 9(12) | D
B3EH | B3 | RecordLength | 0 | 1315 | RL | NA | NA
B3EH | B3 | LocusKey | 0 | 0 | LK | NA | NA
B3EH | B3 | DebugCounters | 0 | 0 | DC | NA | NA
B3EH | STATUS_HISTORY | B3IID | 0 | 0 | N | NA | SQL
B3EH | STATUS_HISTORY | Status | 925 | 3 | N | 9(3) | D
B3EH | STATUS_HISTORY | StatusDate | 928 | 12 | DT4 | 9(12) | D
B3EH | STATUS_HISTORY | RecordLength | 0 | 121 | RL | NA | NA
B3EH | STATUS_HISTORY | LocusKey | 0 | 0 | LK | NA | NA
B3EH | STATUS_HISTORY | DebugCounters | 0 | 0 | DC | NA | NA
B3EH | IP_RMD | IPRMDIid | 0 | 0 | N | NA | SQL
B3EH | IP_RMD | AcctSecurNo | 37 | 5 | N | X(5) | D
B3EH | IP_RMD | TransNo | 42 | 9 | N | X(9) | D
B3EH | IP_RMD | ToSiteID | 940 | 9 | N | 9(9) | D
B3EH | IP_RMD | CargCntrlNo | 101 | 25 | A | X(25) | D
B3EH | IP_RMD | CargCntrlQty | 1091 | 8 | N | 9(8) | D
B3EH | IP_RMD | CarrierCode | 1142 | 4 | A | X(4) | D
B3EH | IP_RMD | CustOff | 134 | 3 | LZ4/NZ | 9(3) | D
B3EH | IP_RMD | VendorName | 76 | 25 | A | X(25) | D
B3EH | IP_RMD | PortUnlading | 1130 | 4 | NZ | 9(4) | D
B3EH | IP_RMD | PurchaseOrder1 | 211 | 15 | A | X(15) | D
B3EH | IP_RMD | PurchaseOrder2 | 226 | 15 | A | X(15) | D
B3EH | IP_RMD | RelDate | 1070 | 12 | DT4 | 9(12) | D
B3EH | IP_RMD | ShipVia | 163 | 18 | A | X(18) | D
B3EH | IP_RMD | Weight | 1112 | 7 | N | 9(7) | D
B3EH | IP_RMD | USPortExit | 1107 | 4 | A | X(4) | D
B3EH | IP_RMD | CreateDate | 25 | 12 | DT4 | 9(12) | D
B3EH | IP_RMD | LIIBrchNo | 4 | 3 | N | 9(3) | D
B3EH | IP_RMD | LIIRefNo | 7 | 6 | N | 9(6) | D
B3EH | IP_RMD | B3Type | 23 | 2 | A | X(2) | D
B3EH | IP_RMD | ModeTransp | 1129 | 1 | NZ | 9(1) | D
B3EH | IP_RMD | CtryOrigin | 141 | 3 | A | X(3) | D
B3EH | IP_RMD | PlaceExp | 137 | 4 | A | X(4) | D
B3EH | IP_RMD | ShipDate | 1022 | 8 | D | 9(8) | D
B3EH | IP_RMD | FromSiteId | 0 | 0 | N | 240898001 | CON
B3EH | IP_RMD | IPStatus | 0 | 0 | A | NA | NA
B3EH | IP_RMD | RecordLength | 0 | 1299 | RL | NA | NA
B3EH | IP_RMD | LocusKey | 0 | 0 | LK | NA | NA
B3EH | IP_RMD | DebugCounters | 0 | 0 | DC | NA | NA
CARR | CARRIER | CarrierCode | 5 | 4 | A | X(4) | LK
CARR | CARRIER | Description | 26 | 35 | A | X(35) | D
CARR | CARRIER | RecordLength | 0 | 510 | RL | NA | NA
CARR | CARRIER | LocusKey | 0 | 0 | LK | NA | NA
CARR | CARRIER | DebugCounters | 0 | 0 | DC | NA | NA
CBBO | BRANCH | LIIBrchNo | 7 | 3 | K | 9(3) | LK
CBBO | BRANCH | Description | 26 | 35 | A | X(35) | D
CBBO | BRANCH | RecordLength | 0 | 510 | RL | NA | NA
CBBO | BRANCH | LocusKey | 0 | 0 | LK | NA | NA
CBBO | BRANCH | DebugCounters | 0 | 0 | DC | NA | NA
CBCN | CTRY_CODE | CtryCode | 5 | 4 | K | X(4) | LK
CBCN | CTRY_CODE | Description | 26 | 30 | A | X(30) | D
CBCN | CTRY_CODE | RecordLength | 0 | 510 | RL | NA | NA
CBCN | CTRY_CODE | LocusKey | 0 | 0 | LK | NA | NA
CBCN | CTRY_CODE | DebugCounters | 0 | 0 | DC | NA | NA
CBCO | CANCT_OFF | CanctOffCode | 5 | 3 | LZ4/K | 9(3) | LK
CBCO | CANCT_OFF | Description | 26 | 35 | A | X(35) | D
CBCO | CANCT_OFF | RecordLength | 0 | 510 | RL | NA | NA
CBCO | CANCT_OFF | LocusKey | 0 | 0 | LK | NA | NA
CBCO | CANCT_OFF | DebugCounters | 0 | 0 | DC | NA | NA
CBTT | STRINGTABLE | StrCode | 5 | 2 | A | 9(2) | K
CBTT | STRINGTABLE | Description | 56 | 30 | A | X(30) | D
CBTT | STRINGTABLE | StrType | 0 | 3 | A | TRB | CON
CBTT | STRINGTABLE | RecordLength | 0 | 510 | RL | NA | NA
CBTT | STRINGTABLE | LocusKey | 0 | 0 | LK | NA | NA
CBTT | STRINGTABLE | DebugCounters | 0 | 0 | DC | NA | NA
CCID | IP_CCI_LINE | CCILineIID | 0 | 0 | N | NA | SQL
CCID | IP_CCI_LINE | CCIIID | 0 | 0 | N | NA | LOO
CCID | IP_CCI_LINE | CCIPageNo | 34 | 6 | N | 9(6) | D
CCID | IP_CCI_LINE | CCILineNo | 40 | 5 | N | 9(5) | D
CCID | IP_CCI_LINE | CtryOrigin | 480 | 3 | A | X(3) | D
CCID | IP_CCI_LINE | CurrCode | 78 | 3 | A | X(3) | D
CCID | IP_CCI_LINE | PartDesc | 107 | 58 | A | X(58) | D
CCID | IP_CCI_LINE | DiscntTypeDesc | 392 | 40 | A | X(40) | D
CCID | IP_CCI_LINE | HSNo | 519 | 10 | A | X(10) | D
CCID | IP_CCI_LINE | ItemDiscnt | 432 | 8 | CN/P2 | 9(5).9(2) | D
CCID | IP_CCI_LINE | PartKeywrd | 82 | 25 | A | X(25) | D
CCID | IP_CCI_LINE | Quantity | 353 | 14 | P2 | 9(11).9(2) | D
CCID | IP_CCI_LINE | RevTotVal | 441 | 14 | CN/P4 | 9(9).9(4) | D
CCID | IP_CCI_LINE | UnitMeas | 367 | 5 | A | X(5) | D
CCID | IP_CCI_LINE | UnitPrice | 339 | 14 | P4 | 9(9).9(4) | D
CCID | IP_CCI_LINE | NoPacks | 529 | 2 | CN/N | 9(2) | D
CCID | IP_CCI_LINE | RecordLength | 0 | 550 | RL | NA | NA
CCID | IP_CCI_LINE | LocusKey | 0 | 0 | LK | NA | NA
CCID | IP_CCI_LINE | DebugCounters | 0 | 0 | DC | NA | NA
CLAC | LII_ACCOUNT | LIIClientNo | 4 | 6 | N | 9(6) | K
CLAC | LII_ACCOUNT | LIIAccountNo | 10 | 3 | N | 9(3) | K
CLAC | LII_ACCOUNT | Name | 23 | 35 | A | X(35) | D
CLAC | LII_ACCOUNT | SiteID | 14 | 9 | N | 9(9) | D
CLAC | LII_ACCOUNT | PartnerBFlag | 13 | 1 | A | X(1) | D
CLAC | LII_ACCOUNT | RecordLength | 0 | 122 | RL | NA | NA
CLAC | LII_ACCOUNT | LocusKey | 0 | 0 | LK | NA | NA
CLAC | LII_ACCOUNT | DebugCounters | 0 | 0 | DC | NA | NA
CLCO | ACCOUNT_CONTACT | AcctContIID | 0 | 0 | NA | NA | SQL
CLCO | ACCOUNT_CONTACT | EmployeeNo | 10 | 5 | N | 9(5) | D
CLCO | ACCOUNT_CONTACT | LIIClientNo | 1 | 6 | N | 9(6) | D
CLCO | ACCOUNT_CONTACT | LIIAccountNo | 7 | 3 | N | 9(3) | D
CLCO | ACCOUNT_CONTACT | RecordLength | 0 | 44 | RL | NA | NA
CLCO | ACCOUNT_CONTACT | LocusKey | 0 | 0 | LK | NA | NA
CLCO | ACCOUNT_CONTACT | DebugCounters | 0 | 0 | DC | NA | NA
CLIE | LII_CLIENT | LIIClientNo | 4 | 6 | N | 9(6) | K
CLIE | LII_CLIENT | Name | 23 | 35 | A | X(35) | D
CLIE | LII_CLIENT | SiteID | 14 | 9 | N | 9(9) | D
CLIE | LII_CLIENT | PartnerBFlag | 13 | 1 | A | X(1) | D
CLIE | LII_CLIENT | LastPaymntDate | 0 | 0 | D | NA | D
CLIE | LII_CLIENT | LastChequeAmt | 0 | 0 | N | NA | D
CLIE | LII_CLIENT | Terms | 0 | 0 | N | NA | D
CLIE | LII_CLIENT | RecordLength | 0 | 122 | RL | NA | NA
CLIE | LII_CLIENT | LocusKey | 0 | 0 | LK | NA | NA
CLIE | LII_CLIENT | DebugCounters | 0 | 0 | DC | NA | NA
EMPL | LII_CONTACT | EmployeeNo | 1 | 5 | N | 9(5) | K
EMPL | LII_CONTACT | ContactCode | 41 | 3 | A | 9(3) | D
EMPL | LII_CONTACT | LastName | 6 | 20 | A | X(20) | D
EMPL | LII_CONTACT | FirstName | 26 | 15 | A | X(15) | D
EMPL | LII_CONTACT | Location | 44 | 25 | A | X(25) | D
EMPL | LII_CONTACT | PhoneNo | 69 | 10 | A | 9(10) | D
EMPL | LII_CONTACT | PhoneExt | 79 | 4 | NZ | 9(4) | D
EMPL | LII_CONTACT | FaxNo | 83 | 10 | NZ | 9(10) | D
EMPL | LII_CONTACT | InActiveFlag | 93 | 1 | A | X(1) | D
EMPL | LII_CONTACT | RecordLength | 0 | 122 | RL | NA | NA
EMPL | LII_CONTACT | LocusKey | 0 | 0 | LK | NA | NA
EMPL | LII_CONTACT | DebugCounters | 0 | 0 | DC | NA | NA
EMTI | CONTACT_TYPE | ContactType | 1 | 3 | N | 9(3) | K
EMTI | CONTACT_TYPE | Description | 4 | 25 | A | X(25) | D
EMTI | CONTACT_TYPE | RecordLength | 0 | 61 | RL | NA | NA
EMTI | CONTACT_TYPE | LocusKey | 0 | 0 | LK | NA | NA
EMTI | CONTACT_TYPE | DebugCounters | 0 | 0 | DC | NA | NA
CCIH | IP_CCI | CCIIID | 7 | 25 | N | NA | SQL
CCIH | IP_CCI | CCIIID | 51 | 25 | N | NA | DUP
CCIH | IP_CCI | ToSiteId | 1 | 6 | N | 9(6) | D
CCIH | IP_CCI | RefNo | 2344 | 9 | N | 9(9) | D
CCIH | IP_CCI | CommerInvNo | 2287 | 20 | A | X(20) | D
CCIH | IP_CCI | CondSale | 3993 | 35 | A | X(35) | D
CCIH | IP_CCI | CostNotIncl | 2267 | 9 | CN/P2 | 9(6).9(2) | D
CCIH | IP_CCI | DeptRulingDate | 2011 | 15 | A | X(15) | D
CCIH | IP_CCI | DeptRulingNo | 2091 | 20 | A | X(20) | D
CCIH | IP_CCI | EntryTransShip | 434 | 4 | A | X(4) | D
CCIH | IP_CCI | ExpNotIncl | 2276 | 9 | CN/P2 | 9(6).9(2) | D
CCIH | IP_CCI | InclCost | 2240 | 9 | CN/P2 | 9(6).9(2) | D
CCIH | IP_CCI | InclExp | 2249 | 9 | CN/P2 | 9(6).9(2) | D
CCIH | IP_CCI | InclTrans | 2231 | 9 | CN/P2 | 9(6).9(2) | D
CCIH
IP_CCI
InvTot
3816
12
P2
9(9).9(2)
D
CCIH
IP_CCI
OtherCCIRef
441
35
A
X(35)
D
CCIH
IP_CCI
OtherNotes
2307
75
A
X(75)
D
CCIH
IP_CCI
PurchOrderNo
3840
20
A
X(20)
D
CCIH
IP_CCI
PurchOrderRef
3860
20
A
X(20)
D
CCIH
IP_CCI
PurchSupply
2286
1
A
X(1)
D
CCIH
IP_CCI
RoyaltyProceeds
2285
1
A
X(1)
D
CCIH
IP_CCI
ShipDate
566
8
A
9(8)
D
CCIH
IP_CCI
TermsPaymnt
2151
35
A
X(35)
D
CCIH
IP_CCI
TranspNotIncl
2258
9
CN/P2
9(6).9(2)
D
CCIH
IP_CCI
UnitMeasNet
2126
5
A
X(5)
D
CCIH
IP_CCI
WayBill
546
20
A
X(20)
D
CCIH
IP_CCI
WeightGross
2141
10
P2
9(7).9(2)
D
CCIH
IP_CCI
WeightNet
2131
10
P2
9(7).9(2)
D
CCIH
IP_CCI
ConsigneeName
574
36
A
X(36)
D
CCIH
IP_CCI
ConsigneeAddre
ss1
610
36
A
X(36)
D
CCIH
IP_CCI
ConsigneeAddre
ss2
646
36
A
X(36)
D
CCIH
IP_CCI
ConsigneeAddre
ss3
682
36
A
X(36)
D
CCIH
IP_CCI
ConsigneeCity
718
36
A
X(36)
D
CCIH
IP_CCI
ConsigneeProvince
754
36
A
X(36)
D
CCIH
IP_CCI
ConsigneeCountry
790
36
A
X(36)
D
CCIH
IP_CCI
ConsigneePostCode
826
36
A
X(36)
D
CCIH
IP_CCI
CurrCodeDesc
2221
10
A
X(10)
D
CCIH
IP_CCI
DirectShipLocation
3959
25
A
X(25)
D
CCIH
IP_CCI
ExporterName
1443
36
A
X(36)
D
CCIH
IP_CCI
ExporterAddress1
1479
36
A
X(36)
D
CCIH
IP_CCI
ExporterAddress2
1515
36
A
X(36)
D
CCIH
IP_CCI
ExporterAddress3
1551
36
A
X(36)
D
CCIH
IP_CCI
ExporterCity
1587
36
A
X(36)
D
CCIH
IP_CCI
ExportProvince
1623
36
A
X(36)
D
CCIH
IP_CCI
ExporterCountry
1659
36
A
X(36)
D
CCIH
IP_CCI
ExporterPostCode
1695
36
A
X(36)
D
CCIH
IP_CCI
OriginatorName
1767
36
A
X(36)
D
CCIH
IP_CCI
OriginatorAddress1
1803
36
A
X(36)
D
CCIH
IP_CCI
OriginatorAddress2
1839
36
A
X(36)
D
CCIH
IP_CCI
OriginatorAddress3
1875
36
A
X(36)
D
CCIH
IP_CCI
OriginatorCity
1911
36
A
X(36)
D
CCIH
IP_CCI
OriginatorProvince
1947
36
A
X(36)
D
CCIH
IP_CCI
OriginatorCountry
1983
36
A
X(36)
D
CCIH
IP_CCI
OriginatorPostCode
2019
36
A
X(36)
D
CCIH
IP_CCI
PurchaserName
1119
36
A
X(36)
D
CCIH
IP_CCI
PurchaserAddress1
1155
36
A
X(36)
D
CCIH
IP_CCI
PurchaserAddress2
1191
36
A
X(36)
D
CCIH
IP_CCI
PurchaserAddress3
1227
36
A
X(36)
D
CCIH
IP_CCI
PurchaserCity
1263
36
A
X(36)
D
CCIH
IP_CCI
PurchaserProvince
1299
36
A
X(36)
D
CCIH
IP_CCI
PurchaserCountry
1335
36
A
X(36)
D
CCIH
IP_CCI
PurchaserPostCode
1371
36
A
X(36)
D
CCIH
IP_CCI
VendName
90
36
A
X(36)
D
CCIH
IP_CCI
VendorAddress1
126
36
A
X(36)
D
CCIH
IP_CCI
VendorAddress2
162
36
A
X(36)
D
CCIH
IP_CCI
VendorAddress3
198
36
A
X(36)
D
CCIH
IP_CCI
VendorCity
234
36
A
X(36)
D
CCIH
IP_CCI
VendorProvince
270
36
A
X(36)
D
CCIH
IP_CCI
VendorCountry
306
36
A
X(36)
D
CCIH
IP_CCI
VendorPostCode
342
36
A
X(36)
D
CCIH
IP_CCI
VendorStateCode
414
3
A
X(3)
D
CCIH
IP_CCI
VendorZipCode
417
5
N
9(5)
D
CCIH
IP_CCI
IPStatus
4040
1
A
NA
D
CCIH
IP_CCI
FromSiteId
2332
9
N
NA
D
CCIH
IP_CCI
CCIExpenseFlag
0
0
A
NA
D
CCIH
IP_CCI
CommerInvFlag
0
0
A
NA
D
CCIH
IP_CCI
CreateDate
0
0
D
NA
D
CCIH
IP_CCI
CreateUserID
0
0
N
NA
D
CCIH
IP_CCI
CurrCode
0
0
N
NA
D
CCIH
IP_CCI
DirectshipLoc
0
0
N
NA
D
CCIH
IP_CCI
DiscntType
0
0
N
NA
D
CCIH
IP_CCI
Discount
0
0
CN/F
NA
D
CCIH
IP_CCI
InvTotB4Discnt
0
0
F
NA
D
CCIH
IP_CCI
ModeDate
0
0
D
NA
D
CCIH
IP_CCI
ModeTransp
0
0
N
NA
D
CCIH
IP_CCI
ModUserID
0
0
N
NA
D
CCIH
IP_CCI
NRIBroker
0
0
CN/N
NA
D
CCIH
IP_CCI
NRIDuty
0
0
CN/N
NA
D
CCIH
IP_CCI
NRITax
0
0
CN/N
NA
D
CCIH
IP_CCI
NRIInclPmntFlg
0
0
A
NA
D
CCIH
IP_CCI
TransShipFlag
0
0
A
NA
D
CCIH
IP_CCI
UnitMeasGross
0
0
N
NA
D
CCIH
IP_CCI
RecordLength
0
404
0
RL
NA
NA
CCIH
IP_CCI
LocusKey
0
0
LK
NA
NA
CCIH
IP_CCI
DebugCounters
0
0
DC
NA
NA
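Several CCIH/IP_CCI money fields above (InclCost, InclExp, InclTrans, TranspNotIncl, and others) carry picture 9(6).9(2): six integer digits, an explicit decimal point, and two decimal digits, which accounts for the declared field length of 9. A minimal conversion sketch (the sample value is hypothetical):

```python
from decimal import Decimal

def parse_pic_9_6_dot_9_2(text: str) -> Decimal:
    """Convert a 9(6).9(2) field such as '001234.56' to a Decimal.

    The picture implies exactly nine characters with the decimal point
    in the seventh position; anything else is rejected.
    """
    if len(text) != 9 or text[6] != ".":
        raise ValueError("expected a 9-character value like '000000.00'")
    return Decimal(text)

print(parse_pic_9_6_dot_9_2("001234.56"))
```

Decimal is used rather than float so that currency amounts round-trip exactly; the wider 9(9).9(2) pictures (length 12) in this layout follow the same shape with the point in the tenth position.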
PB3B
IP_B3B
IPB3BIID
0
0
N
NA
SQL
PB3B
IP_B3B
IPRMDIID
0
0
N
NA
PRV
PB3B
IP_B3B
CargCntrlNo
15
25
A
X(25)
D
PB3B
IP_B3B
Quantity
40
8
F
9(8)
D
PB3B
IP_B3B
RecordLength
0
61
RL
NA
NA
PB3B
IP_B3B
LocusKey
0
0
LK
NA
NA
PB3B
IP_B3B
DebugCounters
0
0
DC
NA
NA
PORE
USPORT_EXIT
PortExit
43
4
K
9(4)
LK
PORE
USPORT_EXIT
Description
1
40
A
X(40)
D
PORE
USPORT_EXIT
RecordLength
0
70
RL
NA
NA
PORE
USPORT_EXIT
LocusKey
0
0
LK
NA
NA
PORE
USPORT_EXIT
DebugCounters
0
0
DC
NA
NA
RECD
B3_RECAP_DETAIL
B3RecapDetIID
0
0
NA
NA
SQL
RECD
B3_RECAP_DETAIL
B3LineIID
0
0
NA
NA
PRV
RECD
B3_RECAP_DETAIL
CCIPageNo
449
4
N
9(4)
D
RECD
B3_RECAP_DETAIL
CCILineNo
454
3
N
9(3)
D
RECD
B3_RECAP_DETAIL
ProductDesc
464
25
N
X(25)
D
RECD
B3_RECAP_DETAIL
UintMeas
642
3
A
X(3)
D
RECD
B3_RECAP_DETAIL
UnitMeasQty
645
11
N
9(7).9(3)
D
RECD
B3_RECAP_DETAIL
Amount
607
14
C
9(11).9(2)
D
RECD
B3_RECAP_DETAIL
PercentSplit
1538
6
C
9(3).9(2)
D
RECD
B3_RECAP_DETAIL
DetailPONumber
1610
15
A
X(15)
D
RECD
B3_RECAP_DETAIL
UnitPrice
1574
14
C
9(9).9(4)
D
RECD
B3_RECAP_DETAIL
RecordLength
0
70
RL
NA
NA
RECD
B3_RECAP_DETAIL
LocusKey
0
0
LK
NA
NA
RECD
B3_RECAP_DETAIL
DebugCounters
0
0
DC
NA
NA
RECM
B3_LINE_COMMENT
B3LineCommentIID
0
0
NA
NA
SQL
RECM
B3_LINE_COMMENT
B3LineIID
0
0
NA
NA
PRV
RECM
B3_LINE_COMMENT
Comment1
549
58
A
X(58)
D
RECM
B3_LINE_COMMENT
Comment2
491
58
A
X(58)
D
RECM
B3_LINE_COMMENT
RecordLength
0
70
RL
NA
NA
RECM
B3_LINE_COMMENT
LocusKey
0
0
LK
NA
NA
RECM
B3_LINE_COMMENT
DebugCounters
0
0
DC
NA
NA
RECP
B3_SUBHEADER
B3SubIID
0
0
NA
Retrieve from B3_SUBHDR_IID table and increment
SQL
RECP
B3_SUBHEADER
B3IID
0
0
NA
N/A - FK must be retrieved from B3 before inserting sub header
LOO
RECP
B3_SUBHEADER
B3SubNo
12
3
N
9(3)
D
RECP
B3_SUBHEADER
CtryOrigin
93
3
A
X(3)
D
RECP
B3_SUBHEADER
CurrCode
110
3
A
X(3)
D
RECP
B3_SUBHEADER
PlaceExp
96
4
A
X(4)
D
RECP
B3_SUBHEADER
ShipDate
102
8
D
9(8)
D
RECP
B3_SUBHEADER
TariffTrtmnt
100
2
A
9(2)
D
RECP
B3_SUBHEADER
TimeLim
113
2
N
9(2)
D
RECP
B3_SUBHEADER
TimeLimUnit
115
1
A
X(1)
D
RECP
B3_SUBHEADER
VendorName
60
25
A
X(25)
D
RECP
B3_SUBHEADER
VendorState
85
3
A
X(3)
D
RECP
B3_SUBHEADER
VendorZip
88
5
NZ
9(5)
D
RECP
B3_SUBHEADER
LIIBrchNo
3
3
N
9(3)
SK
RECP
B3_SUBHEADER
LIIRefNo
6
6
N
9(6)
SK
RECP
B3_SUBHEADER
RecordLength
0
170
0
RL
NA
NA
RECP
B3_SUBHEADER
LocusKey
0
0
LK
NA
NA
RECP
B3_SUBHEADER
DebugCounters
0
0
DC
NA
NA
RECQ
B3_LINE
B3LineIID
0
0
NA
NA
SQL
RECQ
B3_LINE
B3SubIID
0
0
NA
NA
PRV
RECQ
B3_LINE
B3LineNo
1699
4
N
9(3)
D
RECQ
B3_LINE
AdValDutyRateUMeas
642
3
A
X(3)
D
RECQ
B3_LINE
AdValRate1
116
6
N
9(3).9(2)
D
RECQ
B3_LINE
ConvToQty1
645
11
N
9(7).9(3)
D
RECQ
B3_LINE
ConvToQty2
659
11
N
9(7).9(3)
D
RECQ
B3_LINE
ConvToQty3
673
11
N
9(7).9(3)
D
RECQ
B3_LINE
ExcDuty
1514
12
N
9(9).9(2)
D
RECQ
B3_LINE
ExcDutyRateUMeas
670
3
A
X(3)
D
RECQ
B3_LINE
ExcDutyRate
150
10
N
9(3).9(6)
D
RECQ
B3_LINE
ExchgRate
621
9
N
9(2).9(6)
D
RECQ
B3_LINE
ExcTax
1478
12
N
9(9).9(2)
D
RECQ
B3_LINE
ExcTaxRateUMeas
279
3
A
X(3)
D
RECQ
B3_LINE
ExcTaxRate
263
6
N
9(3).9(2)
D
RECQ
B3_LINE
ExcTaxExmptCode
349
2
A
X(2)
D
RECQ
B3_LINE
GST
1466
12
N
9(9).9(2)
D
RECQ
B3_LINE
GSTRate
459
5
N
9(2).9(2)
D
RECQ
B3_LINE
HSNo
35
10
A
9(10)
D
RECQ
B3_LINE
OICSpecialAut
229
16
A
X(16)
D
RECQ
B3_LINE
PartKeywrd
464
25
A
X(25)
D
RECQ
B3_LINE
PartSufx
489
2
N
9(2)
D
RECQ
B3_LINE
PartDesc
549
58
A
X(58)
D
RECQ
B3_LINE
SIMACode
630
2
NZ
9(2)
D
RECQ
B3_LINE
SIMAVal
632
10
N
9(7).9(2)
D
RECQ
B3_LINE
SpcDutyRateUMeas
656
3
A
X(3)
D
RECQ
B3_LINE
SpcRate
163
10
N
9(3).9(6)
D
RECQ
B3_LINE
TariffCode
45
4
NZ
9(4)
D
RECQ
B3_LINE
VFCC
1321
12
N
9(9).9(2)
D
RECQ
B3_LINE
VFD
1430
12
N
9(9).9(2)
D
RECQ
B3_LINE
VFDCode
291
2
A
9(2)
D
RECQ
B3_LINE
VFT
1454
12
N
9(9).9(2)
D
RECQ
B3_LINE
LineComment
491
58
A
X(58)
D
RECQ
B3_LINE
AdValDuty
1357
12
N
9(9).9(2)
D
RECQ
B3_LINE
SpcDuty
1369
12
N
9(9).9(2)
D
RECQ
B3_LINE
TotalDuty
1442
12
N
9(9).9(2)
D
RECQ
B3_LINE
GSTExemptCode
345
2
NZ
9(2)
D
RECQ
B3_LINE
RulingNumber
176
45
A
X(45)
D
RECQ
B3_LINE
TRQNo
351
9
N
9(9)
D
RECQ
B3_LINE
PrevTransNo
791
14
A
X(14)
D
RECQ
B3_LINE
PrevLineNo
805
4
N
9(4)
D
RECQ
B3_LINE
RecordLength
0
175
0
RL
NA
NA
RECQ
B3_LINE
LocusKey
0
0
LK
NA
NA
RECQ
B3_LINE
DebugCounters
0
0
DC
NA
NA
TADU
HS_DUTY_RATE
HSNo
1
10
A
9(8)
AK
TADU
HS_DUTY_RATE
HStariffTrtmnt
11
2
A
X(2)
AK
TADU
HS_DUTY_RATE
EffDate
13
8
D
9(8)
AK
TADU
HS_DUTY_RATE
ExpryDate
21
8
D
9(8)
D
TADU
HS_DUTY_RATE
AdValRate
29
6
F2
9(3).9(2)
D
TADU
HS_DUTY_RATE
MinAmtType
35
1
A
X(1)
D
TADU
HS_DUTY_RATE
MaxAmtType
36
1
A
X(1)
D
TADU
HS_DUTY_RATE
MinAmt
37
10
F6
9(3).9(6)
D
TADU
HS_DUTY_RATE
MaxAmt
47
10
F6
9(3).9(6)
D
TADU
HS_DUTY_RATE
MinAmtUnitMeas
57
3
A
X(3)
D
TADU
HS_DUTY_RATE
MaxAmtUnitMeas
60
3
A
X(3)
D
TADU
HS_DUTY_RATE
ExcRate
63
10
F6
9(3).9(6)
D
TADU
HS_DUTY_RATE
ExcUnitMeas
73
3
A
X(3)
D
TADU
HS_DUTY_RATE
SpecRate
76
10
F6
9(3).9(6)
D
TADU
HS_DUTY_RATE
SpecUnitMeas
86
3
A
X(3)
D
TADU
HS_DUTY_RATE
RecordLength
0
121
RL
NA
NA
TADU
HS_DUTY_RATE
LocusKey
0
0
LK
NA
NA
TADU
HS_DUTY_RATE
DebugCounters
0
0
DC
NA
NA
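The TADU/HS_DUTY_RATE layout keys each rate row by HSNo, tariff treatment, and EffDate (the AK fields), with ExpryDate bounding the period in which the rate applies. A sketch of how a rate effective on a given ship date might be selected, assuming the rows have already been parsed into dicts (the HS number and rates below are hypothetical):

```python
from datetime import date

def pick_duty_rate(rows, hs_no, trtmnt, ship_date):
    """Return the HS_DUTY_RATE row whose effective period covers
    ship_date for the given HS number and tariff treatment, or None."""
    for row in rows:
        if (row["HSNo"] == hs_no
                and row["HStariffTrtmnt"] == trtmnt
                and row["EffDate"] <= ship_date <= row["ExpryDate"]):
            return row
    return None

# Hypothetical rate history for one HS number / treatment pair.
rates = [
    {"HSNo": "0101210000", "HStariffTrtmnt": "02",
     "EffDate": date(2012, 1, 1), "ExpryDate": date(2012, 12, 31),
     "AdValRate": 6.5},
    {"HSNo": "0101210000", "HStariffTrtmnt": "02",
     "EffDate": date(2013, 1, 1), "ExpryDate": date(2013, 12, 31),
     "AdValRate": 5.0},
]
row = pick_duty_rate(rates, "0101210000", "02", date(2013, 6, 15))
print(row["AdValRate"])  # 5.0
```

Because EffDate is part of the alternate key, a new rate is recorded as a new row rather than an update, so the history stays queryable by date.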
TANX
TARIFF_CODE
TariffCode
1
4
A
9(4)
AK
TANX
TARIFF_CODE
HSTariffTrtmnt
5
2
A
9(2)
AK
TANX
TARIFF_CODE
EffDate
7
8
D
9(8)
AK
TANX
TARIFF_CODE
AdValRate
15
6
F
9(3).9(2)
D
TANX
TARIFF_CODE
MinAmtType
21
1
A
X(1)
D
TANX
TARIFF_CODE
MaxAmttype
22
1
A
X(1)
D
TANX
TARIFF_CODE
MinAmt
23
10
F
9(3).9(6)
D
TANX
TARIFF_CODE
MaxAmt
33
10
F
9(3).9(6)
D
TANX
TARIFF_CODE
MinAmtUnitMeas
43
3
A
X(3)
D
TANX
TARIFF_CODE
MaxAmtUnitMeas
46
3
A
X(3)
D
TANX
TARIFF_CODE
SpecRate
49
10
F
9(3).9(6)
D
TANX
TARIFF_CODE
SpecUnitMeas
59
3
A
X(3)
D
TANX
TARIFF_CODE
ExpryDate
73
8
D
9(8)
D
TANX
TARIFF_CODE
CreateDate
0
0
D
9(8)
D
TANX
TARIFF_CODE
RecordLength
0
128
RL
NA
NA
TANX
TARIFF_CODE
LocusKey
0
0
LK
NA
NA
TANX
TARIFF_CODE
DebugCounters
0
0
DC
NA
NA
TARF
TARIFF
LIIClientNo
5
6
N
9(6)
K
TARF
TARIFF
VendorName
11
25
A
X(25)
K
TARF
TARIFF
ProductKeyword
36
25
A
X(25)
K
TARF
TARIFF
ProductSufx
61
2
N
9(2)
K
TARF
TARIFF
ApprovalCode
253
1
A
X(1)
D
TARF
TARIFF
B3Description
254
58
A
X(58)
D
TARF
TARIFF
B3RefBrch
350
3
N
9(3)
D
TARF
TARIFF
B3RefNo
353
6
N
9(6)
D
TARF
TARIFF
CreateDate
479
8
D
9(8)
D
TARF
TARIFF
COOIndicator
433
1
A
X(1)
D
TARF
TARIFF
COOExpryDate
434
8
D
9(8)
D
TARF
TARIFF
ExcTaxLicInd
490
1
A
X(1)
D
TARF
TARIFF
GSTExemptCode
487
2
NZ
9(2)
D
TARF
TARIFF
GSTRateCode
489
2
A
9(2)
D
TARF
TARIFF
HSNo
359
10
A
9(10)
D
TARF
TARIFF
LastUsedDate
466
8
D
9(8)
D
TARF
TARIFF
ModDate
402
8
D
9(8)
D
TARF
TARIFF
ModUser
507
12
A
X(12)
D
TARF
TARIFF
OIC
130
16
A
X(16)
D
TARF
TARIFF
OICExpryDate
450
8
D
9(8)
D
TARF
TARIFF
PercentSplit
373
6
N
9(3).9(2)
D
TARF
TARIFF
PlaceExp
63
4
A
X(4)
D
TARF
TARIFF
RemissNo
410
7
NZ
9(7)
D
TARF
TARIFF
RemissExpryDate
417
8
D
9(8)
D
TARF
TARIFF
RulingNo
69
45
A
X(45)
D
TARF
TARIFF
RulingExpryDate
114
8
D
9(8)
D
TARF
TARIFF
SpecialInstruct
312
30
A
X(30)
D
TARF
TARIFF
Remarks
146
58
A
X(58)
D
TARF
TARIFF
TariffCode
369
4
NZ
9(4)
D
TARF
TARIFF
TariffTrtmnt
67
2
A
9(2)
D
TARF
TARIFF
VFDCode
204
2
A
9(2)
D
TARF
TARIFF
ExcTaxRate
237
5
N
9(2).9(2)
D
TARF
TARIFF
ExcTaxAmt
224
10
N
9(3).9(6)
D
TARF
TARIFF
ExcTaxUnit
234
3
A
X(3)
D
TARF
TARIFF
ExcTaxDeduct
242
6
N
9(3).9(2)
D
TARF
TARIFF
ExcTaxDeductUnit
248
3
A
X(3)
D
TARF
TARIFF
ExcTaxExmptCode
491
2
A
X(2)
D
TARF
TARIFF
ProjectCode
379
5
A
X(5)
D
TARF
TARIFF
BusinessUnitCode
384
5
A
X(5)
D
TARF
TARIFF
MaterialClassCode
524
3
A
X(3)
D
TARF
TARIFF
CountryOrigin
519
4
A
X(4)
D
TARF
TARIFF
RequirementID
529
8
A
X(8)
D
TARF
TARIFF
Version
537
4
A
X(4)
D
TARF
TARIFF
OGDExtension
541
6
A
X(6)
D
TARF
TARIFF
EndUse
547
3
A
X(3)
D
TARF
TARIFF
Miscellaneous
550
3
A
X(3)
D
TARF
TARIFF
RegType01
553
3
A
X(3)
D
TARF
TARIFF
RecordLength
0
520
RL
NA
NA
TARF
TARIFF
LocusKey
0
0
LK
NA
NA
TARF
TARIFF
DebugCounters
0
0
DC
NA
NA
TAUM
HS_UOM
HSNo
1
10
A
X(10)
AK
TAUM
HS_UOM
EffDate
11
8
D
9(8)
AK
TAUM
HS_UOM
UnitMeas
21
3
A
X(3)
D
TAUM
HS_UOM
ExpryDate
24
8
D
9(8)
D
TAUM
HS_UOM
RecordLength
0
121
RL
NA
NA
TAUM
HS_UOM
LocusKey
0
0
LK
NA
NA
TAUM
HS_UOM
DebugCounters
0
0
DC
NA
NA